This is a presentation given for a HealthData.gov Developer Challenge; see
http://www.health2con.com/devchallenge/health-data-platform-metadata-challenge/
and/or
http://www.health2con.com/devchallenge/health-data-platform-simple-sign-on-challenge/
(both links contain the same embedded video and deck)
Realizing the GPRAMA using Government Linked Data (George Thomas)
This presentation was given at the 2011 DoD symposium on SOA & Semantic Technology, and demonstrates the use of open standard metadata tags to implement the Government Performance and Results Act Modernization Act (GPRAMA) using topical examples like cloud computing, and the meaningful use of electronic health record exchanges.
This presentation was provided by Michael Roberts of Emerald Group Publishing during the NISO event, Enabling Discovery, Part One: Publishers and Libraries Talk Metadata & Monographs, held on January 14, 2019.
Presentation given at Supercomputing 2007 on the progress of data sharing models, specifically highlighting the collision of data grid / data service and Web 2.0 worlds.
Phil Bourne, Protein Data Bank; Data Publication Repositories; RDAP11 Summit
The 2nd Research Data Access and Preservation (RDAP) Summit
An ASIS&T Summit
March 31-April 1, 2011 Denver, CO
In cooperation with the Coalition for Networked Information
http://asist.org/Conferences/RDAP11/index.html
Interacting with Linked Data to Facilitate its Sustainability (Roberto García)
Presentation about the importance of user participation for the sustainability of Linked Data publishing. It also shows an approach to automatic user interface generation for Linked Data that facilitates user participation.
Data Harvesting, Curation and Fusion Model to Support Public Service Recommen... (Citadelh2020)
CITADEL is an H2020 European project that is creating an ecosystem of best practices, tools, and recommendations to transform Public Administrations (PAs) via an inclusive approach, in order to provide stakeholders with more efficient, inclusive and citizen-centric services. The CITADEL ecosystem will allow PAs to use what they already know, plus new data, to implement what really matters to citizens and to shape and co-create more efficient and inclusive public services. CITADEL innovates by using ICTs to find out why citizens stop using public services, and uses this information to re-adjust provision to bring them back in. It also identifies why citizens are not using a given public service (due to affordability, accessibility, lack of knowledge, embarrassment, lack of interest, etc.) and, where appropriate, uses this information to make public services more attractive, so that citizens start using them.
The DataTank, a tool designed and developed by IMEC’s IDLab, will be extended to provide the Data Harvesting/Curation/Fusion (DHCF) component of the platform. The DataTank provides an open source, open data platform which not only allows publishing datasets according to standardised guidelines and taxonomies (DCAT-AP), but also transforms the data into a variety of reusable formats. The extension will include an intelligent way of harvesting and fusing different data sources using semantics and Linked Data mapping technologies developed by IDLab. In the context of CITADEL, the new DHCF component will enable the visualization and analysis of trends in the usage of public services in European cities, playing a key role in generating personalized recommendations to citizens as well as to PAs, in terms of suggesting improvements to the current suite of public services.
https://twitter.com/Citadelh2020
https://twitter.com/gayane_sedraky
https://twitter.com/imec_int
https://twitter.com/IDLabResearch
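To make the DCAT-AP publishing idea concrete, here is a minimal sketch of a DCAT-style dataset record built as JSON-LD. This is illustrative only, not the DataTank's actual output format; the dataset, publisher, and URLs are hypothetical, and only the core DCAT/DCterms property names are assumed.

```python
import json

def make_dcat_record(title, description, publisher, distributions):
    """Build a minimal DCAT-style dataset record as a JSON-LD dictionary."""
    return {
        "@context": {"dcat": "http://www.w3.org/ns/dcat#",
                     "dct": "http://purl.org/dc/terms/"},
        "@type": "dcat:Dataset",
        "dct:title": title,
        "dct:description": description,
        "dct:publisher": publisher,
        # One distribution per reusable format the platform exposes.
        "dcat:distribution": [
            {"@type": "dcat:Distribution",
             "dct:format": fmt,
             "dcat:downloadURL": url}
            for fmt, url in distributions
        ],
    }

# Hypothetical dataset published in two reusable formats.
record = make_dcat_record(
    "Public service usage statistics",
    "Monthly usage counts for city services",
    "Example City Council",
    [("text/csv", "http://example.org/usage.csv"),
     ("application/json", "http://example.org/usage.json")],
)
print(json.dumps(record, indent=2))
```

The same record could be serialized to RDF/Turtle by an off-the-shelf JSON-LD library; the point here is only the shape of the metadata.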
Enabling Self-service Data Provisioning Through Semantic Enrichment of Data |... (Ahmad Assaf)
Publicly available datasets contain knowledge from various domains such as encyclopedic, government, geographic, entertainment and so on. The increasing diversity of these datasets makes it difficult to annotate them with a fixed number of pre-defined tags. Moreover, manually entered tags are subjective and may not capture their essence and breadth. We propose a mechanism to automatically attach meta information to data objects by leveraging knowledge bases like DBpedia and Freebase which facilitates data search and acquisition for business users.
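The automatic tagging idea can be sketched as matching entity labels from a knowledge base against a dataset's description. A real system would query DBpedia or Freebase; here `KB_LABELS` is a hypothetical local stand-in, and the categories are invented for illustration.

```python
import re

# Hypothetical snapshot of knowledge-base labels and their categories;
# a real implementation would look these up in DBpedia or Freebase.
KB_LABELS = {
    "diabetes": "Health",
    "medicare": "Government/Health",
    "hospital": "Health",
    "budget": "Government/Finance",
}

def auto_tags(description):
    """Return the categories whose labels occur in the description."""
    words = set(re.findall(r"[a-z]+", description.lower()))
    return {cat for label, cat in KB_LABELS.items() if label in words}

print(sorted(auto_tags("Medicare hospital spending by state")))
# -> ['Government/Health', 'Health']
```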
Linked Open Data (LOD) has emerged as one of the largest collections of interlinked datasets on the web. To benefit from this mine of data, one needs access to descriptive information about each dataset (its metadata). This metadata enables dataset discovery, understanding, integration and maintenance. Data portals, which are datasets' access points, offer metadata represented in different and heterogeneous models. We first propose a harmonized dataset model, based on a systematic literature survey, that provides complete metadata coverage to enable data discovery, exploration and reuse by business users. Second, rich metadata is currently limited to a few data portals, where it is usually provided manually and is thus often incomplete and inconsistent in quality. We propose a scalable automatic approach for extracting, validating, correcting and generating descriptive linked dataset profiles. This approach applies several techniques to check the validity of the metadata provided and to generate descriptive and statistical information for a particular dataset or for an entire data portal.
Traditional data quality is a thoroughly researched field with several benchmarks and frameworks to grasp its dimensions. Ensuring data quality in Linked Open Data is much more complex: it consists of structured information supported by models, ontologies and vocabularies, and contains queryable endpoints and links. We propose an objective assessment framework for Linked Data quality based on quality metrics that can be automatically measured. We further present an extensible quality measurement tool implementing this framework, which helps data owners on the one hand to rate the quality of their datasets and get hints on possible improvements, and data consumers on the other hand to choose their data sources from a ranked set.
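One example of an automatically measurable metric of the kind the abstract describes is completeness: the fraction of expected metadata fields that are present and non-empty. The field list and records below are illustrative assumptions, not the paper's actual quality model.

```python
# Fields a portal might expect every dataset record to provide (assumed).
EXPECTED_FIELDS = ["title", "description", "license", "publisher", "issued"]

def completeness(metadata):
    """Fraction of expected fields that are present and non-empty."""
    filled = sum(1 for f in EXPECTED_FIELDS if metadata.get(f))
    return filled / len(EXPECTED_FIELDS)

# Hypothetical record: title and description filled, license empty,
# publisher and issued missing entirely.
record = {"title": "Air quality", "description": "Hourly PM2.5", "license": ""}
print(completeness(record))  # 2 of 5 fields filled -> 0.4
```

A full framework would combine many such metrics (availability, licensing, linkage, freshness) into a ranked score; completeness is just the simplest to automate.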
A new model for interoperable administrative data (Rob Worthington)
This presentation shares Kwantu's work on interoperable administrative systems. It was given at the Global Partnership for Sustainable Development Data National Data Roadmap Workshop in Costa Rica in 2018.
Web Services Discovery and Recommendation Based on Information Extraction and... (ijwscjournal)
This paper shows that the problem of web service representation is crucial and analyzes the various factors that influence it. It presents the traditional representation of web services, based on the textual descriptions contained in WSDL files. Unfortunately, textual web service descriptions are dirty and need significant cleaning to keep only useful information. To deal with this problem, we introduce a rule-based text tagging method that filters web service descriptions to keep only significant information; a new representation based on this filtered data is then introduced. Since many web services have empty descriptions, we also consider representations based on the WSDL file structure (types, attributes, etc.). Alternatively, we introduce a new representation called symbolic reputation, which is computed from relationships between web services. The impact of these representations on web service discovery and recommendation is studied and discussed in experiments using real-world web services.
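A rule-based cleaning pass of the kind the abstract describes can be sketched as a few filtering rules over a WSDL-style textual description. The specific rules and the sample input below are illustrative assumptions, not the paper's actual tagging method.

```python
import re

# Assumed boilerplate phrases that carry no information about what the
# service actually does.
BOILERPLATE = re.compile(r"(?i)\b(this (web )?service|generated by|wsdl)\b")

def clean_description(text):
    """Filter a raw service description down to likely-significant words."""
    text = re.sub(r"<[^>]+>", " ", text)      # drop XML/HTML tags
    text = BOILERPLATE.sub(" ", text)         # drop boilerplate phrases
    text = re.sub(r"[^A-Za-z ]", " ", text)   # drop punctuation and digits
    words = [w for w in text.lower().split() if len(w) > 2]
    return " ".join(words)

raw = "<doc>This Web Service returns current weather, v2.1 (WSDL)</doc>"
print(clean_description(raw))  # -> "returns current weather"
```

The filtered output would then feed the representation used for discovery and recommendation, e.g. as a bag of words or a vector-space model.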
Service innovation: the hidden value of open data (Slim Turki, Dr.)
> Presented at the Share-PSI Krems Workshop: A self sustaining business model for open data
- http://www.w3.org/2013/share-psi/workshop/krems/papers/ServiceInnovation-theHiddenValueOfOpenData
- http://www.w3.org/2013/share-psi/workshop/krems/
> Summary
The development of a data-driven economy has been a major orientation of economic policies over the past few years, based on (i) the wider availability of data, promoted in particular by the Open Data movement, and (ii) the development of dedicated tools to support heterogeneous data and data in large quantities (Big Data). Reports anticipate the creation of enormous amounts of economic activity and growth opportunities. However, the promise of the data-driven economy lies to a large extent in the development of new services. The return on investment of open data policies, for instance, should be evaluated from the services created on top of open data sets. Open data promoters increasingly couple open data initiatives with actions dedicated to promoting the datasets for the creation of new services. Nevertheless, the results in terms of services created remain below the expectations of open data promoters. Indeed, most services created are not sustainable and/or do not use the variety of datasets available; to a large extent they rely on a limited number of very popular datasets. To make the promise of the data-driven economy a reality, it is therefore necessary to increase the reuse of data and the value that services extract from it. Our hypothesis is that service innovation approaches can help understand the mechanisms that drive the creation of services. We therefore propose to analyse the roles that data can have in the design of services, based on a theoretical framework of service innovation.
Presentation on a new system for regional document/information management, given by Timo Baur, CCCCC, Belize at the Marketplace for techies session at the 4th I-K-Mediary workshop in Bangladesh, January 2011.
HDI III - Healthdata.gov - Now, Next and Challenges (George Thomas)
This is a presentation that will be given at the 2012 Health Datapalooza (http://hdiforum.org), describing the new healthdata.gov site, its PaaS/DaaS direction, and related i2/ONC developer challenges.
Continuous Delivery and Micro Services - A Symbiosis (Eberhard Wolff)
Continuous Delivery profits from Micro Services - and the other way round. This presentation shows how the two technologies work together - and how Micro Services can be used to simplify the transition to Continuous Delivery.
GigaByte Chief Editor Scott Edmunds presents on how to prepare a data paper for the TDR and WHO sponsored call for data papers describing datasets on vectors of human diseases, launched in November 2021. Presented at the GBIF webinar on 25 January 2022, and aimed at authors interested in submitting a manuscript to the series.
SC6 Workshop 1: Big Data Europe platform requirements and draft architecture:... (BigData_Europe)
Presentation by Martin Kaltenböck, Semantic Web Company, at the first workshop of Societal Challenge 6 in the BigDataEurope project, taking place in Luxembourg on 18 November 2015.
http://www.big-data-europe.eu/social-sciences/
CTO Perspectives: What's Next for Data Management and Healthcare? (Health Catalyst)
Health Catalyst's Chief Technology Officer, Bryan Hinton, shares his perspective, thoughts, and insights on new and emerging trends for data management in healthcare. Bryan offers a brief presentation on what hospitals and healthcare systems can expect, followed by an extended Q&A.
Impact of DDOD on Data Quality - White House 2016 (David Portnoy)
"The Impact of Demand-Driven Open Data (DDOD) on Data Quality" was presented on April 27, 2016 at Open Data Roundtable held at the White House Office of Science and Technology Policy.
It discusses the data quality problems prevalent in open data and their impact, the origins of the DDOD concept, how it works, progress towards its goals, several use case examples, and how to implement it at other organizations.
More information:
* DDOD http://ddod.healthdata.gov
* Open Data Roundtables https://www.data.gov/meta/open-data-roundtables/
* White House Office of Science and Technology Policy: https://www.whitehouse.gov/blog/2016/02/05/open-data-empowering-americans-make-data-driven-decisions
HOBBIT project overview presented at European Big Data Value Forum, 21-23 Nov 2017, held in Versailles, France (Palais des Congres).
This work was supported by grants from the EU H2020 Framework Programme provided for the project HOBBIT (GA no. 688227).
The Container Evolution of a Global Fortune 500 Company with Docker EE (Docker, Inc.)
In our new digital economy, keeping up can feel like a never-ending expansion of costly technical overhead. Each “trend” adds net-new operational and capital expenses to seemingly bloated run-rate measures already challenged by leadership. Containers may feel like just another one of these trends, bringing their own additional expense. At MetLife, however, we sought to make containerization self-funding, allowing us to fuel change and tap into innovation at large scale. To do this, MetLife’s ModSquad challenged established norms to prove that containers worked in production. Then, we asked Docker for help modernizing our traditional landscape to create funding sources to adopt containers, change holistically, and reduce overhead to our bottom line.
This talk picks up where the MetLife story presented at the Austin DockerCon ends: what happens after you’ve done one thing well and you need to expand the revolution? We'll discuss how MetLife leveraged the Modernize Traditional Apps Program, covering planning, preparation, execution and our post-mortem learnings, in addition to technical obstacles, mindsets, roles, executive concerns and training. I’ll share how we created regional business cases and roadmaps to create a funding pipeline by technology. Finally, we’ll look at our new forecast and ultimately our new future.
Implementing the Open Government Directive using the technologies of the Soci... (George Thomas)
This presentation demonstrates the use of Semantic Web technologies with social networking tools, treating metadata specifications as social media. Example ontologies and instance data from the Capital Planning and Investment Control and Business Motivation domains are created that link 'what' (agency IT investments) with 'why' (agency goals and objectives) using a simple linking ontology. Knowledge workers use a Semantic Halo MediaWiki to curate the data.
This presentation is the culmination of my detail to the E-Government Office in the US Office of Management and Budget and the work I did to evolve and mature initiatives like recovery.gov and data.gov.
'Transparency, Participation, Collaboration'
Solution Architecture works in progress for recovery.gov
This is a presentation I gave at the Sunlight Foundation's http://transparencycamp.org/ on 2/28/09.
With respect to whether the ideas and approaches I've expressed and advocated here will ultimately be realized by those now responsible for managing and operating this initiative - Caveat Venditor/Emptor.
A presentation that captures some basic ideas about connecting planning data with spending data, part of my OMB detail in support of the Obama Administration transparency and open Government goals.
Kubernetes & AI - Beauty and the Beast!?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our beloved cloud native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply them to our own infrastructure and make them work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
Search and Society: Reimagining Information Access for Radical Futures (Bhaskar Mitra)
The field of Information Retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs, while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse, explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
2. On today’s call:
• Adam Wong – Management and Program Analyst, ONC
• Hemali Thakkar – Manager, Developer Challenge, Health 2.0
• George Thomas – Chief Architect, HealthData.gov
3. Agenda for Today’s Meeting
• ONC and the Investing in Innovation (i2) Program
• An Introduction to the (first two of seven) HealthData.gov Domain and Platform Challenges
• Q&A About the Challenges
5. i2 Goals
• Better Health, Better Care, Better Value through Quality Improvement
– Further the mission of the Department of Health and Human Services
– Highlight programs, activities, and issues of concern
• Spur Innovation and Highlight Excellence
– Motivate, inspire, and lead
• Community building – development of an ecosystem
• Stimulate private sector investment
6. What is HealthData.gov?
• “HealthData.gov is a public resource designed to bring liberated health datasets, innovation challenges, and applications and tools to the public to help increase public knowledge and solve problems in health.”
– Todd Park, US Federal CTO (source)
7. HealthData.gov i2 Challenges
• Two types
– three domain specific: improve the integration and liquidity of the data made available
– four platform specific: enhance the capabilities of the technology components
• Three rounds, sequenced to leverage dependencies
– round 1: June through October 2012
– round 2: November 2012 through May 2013
– round 3: June through December 2013
8. HealthData.gov i2 Challenges
• June 2012 through October 2012
– Metadata (domain)
• apply cross-domain metadata from voluntary consensus standards organizations and de facto standards; design other domain specific metadata schemata
– HealthData.gov blog post, Challenge.gov listing
– Simplified Sign On (platform)
• enhance HDP infrastructure components with WebID identity provider and relying party capabilities
– HealthData.gov blog post, Challenge.gov listing
– $35K in prizes: $20K 1st, $10K 2nd, $5K 3rd place (each)
9. First Domain Challenge
• Metadata
– requests the application of existing voluntary consensus standards for metadata common to all open government data
– and invites new designs for health domain specific metadata to classify datasets in our growing catalog, creating entities, attributes, and relations
– that form the foundations for better discovery, integration, and liquidity
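One voluntary consensus standard commonly applied to open government data catalogs is the W3C's DCAT vocabulary. As a hedged illustration of the kind of cross-domain metadata the challenge requests, the sketch below builds a DCAT-style dataset description as JSON-LD; the dataset title, keywords, and structure are hypothetical examples for illustration, not a real HealthData.gov catalog entry or a required submission format.

```python
import json

# A minimal DCAT-style dataset description in JSON-LD. All field values
# below are hypothetical illustrations, not real catalog metadata.
record = {
    "@context": {
        "dcat": "http://www.w3.org/ns/dcat#",
        "dct": "http://purl.org/dc/terms/",
    },
    "@type": "dcat:Dataset",
    "dct:title": "Hospital Compare (example)",
    "dct:publisher": "Centers for Medicare & Medicaid Services",
    "dcat:keyword": ["hospitals", "quality measures", "medicare"],
    "dcat:distribution": {
        "@type": "dcat:Distribution",
        "dcat:mediaType": "text/csv",
    },
}

# Serialize for publication alongside the dataset in the catalog.
serialized = json.dumps(record, indent=2)
print(serialized)
```

Describing datasets with shared vocabularies like DCAT is what enables the cross-catalog discovery and integration the challenge aims for: the same entities, attributes, and relations can be queried uniformly across domains.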
12. Domain Specific Examples
• Centers for Medicare & Medicaid Services
– Hospital Compare (Health Datapalooza 2011)
– see blog post and presentation
• From data.gov.uk
– environmental data (‘bathing water’, or beaches)
– ‘Web 3.0’ API example
13. First Platform Challenge
• WebID based SSO
– will improve community engagement by providing simplified sign on (SSO) for external users interacting across multiple HDP technology components
– making it easier for community collaborators to contribute
– leveraging new approaches to decentralized authentication
14. About WebID
• Leverages existing Web infrastructure
– X.509 certificates and TLS
• A ‘mirrored claims’ approach to authentication
– externalizing LDAP, a human/app ‘API key’
• For more info, see
– http://webid.info/
• http://www.w3.org/2005/Incubator/webid/spec/
• http://www.w3.org/2005/Incubator/webid/wiki
• http://www.w3.org/2005/Incubator/webid/wiki/Implementations
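The ‘mirrored claims’ idea can be sketched concretely: the relying party authenticates a client when the RSA public key (modulus and exponent) from its self-signed TLS client certificate also appears in the profile document published at the WebID URI. The sketch below shows only that comparison step, assuming certificate parsing and profile retrieval happen elsewhere; the key values are tiny illustrative numbers, not real RSA keys.

```python
def webid_claim_matches(cert_key, profile_keys):
    """Return True if the certificate's public key is mirrored in the
    profile document fetched from the WebID URI."""
    return any(
        cert_key["modulus"] == k["modulus"]
        and cert_key["exponent"] == k["exponent"]
        for k in profile_keys
    )

# Key extracted from the client certificate presented over TLS
# (illustrative values only).
cert_key = {"modulus": 0xB7A3C9, "exponent": 65537}

# Keys listed (e.g., as cert:modulus / cert:exponent statements) in the
# profile document at the WebID URI.
profile_keys = [{"modulus": 0xB7A3C9, "exponent": 65537}]

print(webid_claim_matches(cert_key, profile_keys))  # True when mirrored
```

Because the claim is verified against a document the user controls at their own URI, no central identity provider has to issue or store the credential, which is what makes the approach decentralized.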
15. Where we’re going with WebID
• See (platform) challenge 4
– enabling very flexible and fine-grained access control
• Leads to data centric authorization
– PCAST Health IT Report: ‘data element access service’
– “secure the data, not just the devices” (US Fed CIO Steven VanRoekel)
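The steps above point toward decisions attached to individual data elements rather than to a device or application boundary: each element carries a policy stating which agents (identified by WebID) may access it, and in which mode. The sketch below illustrates that shape with a hypothetical policy and hypothetical WebIDs; it is a minimal illustration of the idea, not the PCAST data element access service design.

```python
# Hypothetical per-element access control list, keyed first by data element,
# then by the agent's WebID, mapping to the modes that agent is granted.
acl = {
    "patient/123/medications": {
        "https://nurse.example/profile#me": {"read"},
        "https://doctor.example/profile#me": {"read", "write"},
    },
}

def allowed(webid, element, mode):
    """Check whether the agent identified by `webid` may perform `mode`
    on `element`; unknown elements and agents are denied by default."""
    return mode in acl.get(element, {}).get(webid, set())

print(allowed("https://nurse.example/profile#me",
              "patient/123/medications", "read"))   # True
print(allowed("https://nurse.example/profile#me",
              "patient/123/medications", "write"))  # False
```

Because the policy travels with the data element, the same check applies wherever the element flows, which is the sense in which this secures the data rather than just the devices.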
16. Timeline for both Challenges
• Submission Period Ends: October 2, 2012
• Winners Notified: Early November