Presentation given at the Data on the Web Best Practices (DWBP) course in San Jose, Costa Rica, February 2017.
Rachael Lammey (Crossref), Mary Hirsch (DataCite)
The underlying data created and/or reused and remixed for research is becoming as crucial as the resulting text-based output. This is your opportunity to dig into the what, the why, and the how of data publication, data citation, and data sharing. Workshop hosts will cover this topic from a range of perspectives. Let's review the best practices and case studies in data citation and data publishing, add to our collective understanding of why this is so important, and contribute to the next steps in building solutions that improve the infrastructure for research data.
Talk, in English, "Publishing Data on the Web", about the Data on the Web Best Practices document, presented at the NIC.br Methodology Week in São Paulo on April 12, 2016.
How DataCite and Crossref Support Research Data Sharing - Crossref LIVE Hannover (Crossref)
Britta Dreyer from DataCite presents on how DataCite and Crossref collaboratively support research data sharing. Presented at Crossref LIVE Hannover, June 27th 2018.
The global need to securely derive instant insights has motivated data architectures ranging from distributed storage to data lakes, data warehouses, and lakehouses. In this talk we describe Tag.bio, a next-generation data mesh platform that embeds vital elements such as domain centricity and ownership, data as products, and self-serve architecture, with a federated computational layer. Tag.bio data products combine data sets, smart APIs, and statistical and machine learning algorithms into decentralized data products that let users discover insights following the FAIR principles. Researchers can use its point-and-click (no-code) system to instantly perform analyses and share versioned, reproducible results. The platform combines a dynamic cohort builder with analysis protocols and applications (low-code) to drive complex analysis workflows. Applications within data products are fully customizable via R and Python plugins (pro-code), and the platform supports notebook-based developer environments with individual workspaces.
Join us for a talk and demo session on the Tag.bio data mesh platform and learn how major pharmaceutical companies and university health systems are using this technology to promote value-based and precision healthcare, find cures for disease, and foster collaboration without explicitly moving data around. The talk also outlines Tag.bio's secure data exchange features for real-world evidence datasets, privacy-centric data products (confidential computing), and integration with cloud services.
The webinar will be based on the LODE-BD Recommendations, which aim to provide bibliographic data providers of open repositories with a set of recommendations supporting the selection of appropriate encoding strategies for producing meaningful Linked Open Data (LOD)-enabled bibliographic data (LODE-BD).
Data on the Web Best Practices: Challenges and Benefits (Centro Web)
Presentation, in English, on the paper "Data on the Web Best Practices: Challenges and Benefits", given at the Open Data Research Symposium in Madrid on October 5, 2016.
The first workshop of the series "Services to support FAIR data" took place in Prague during the EOSC-hub week (on April 12, 2019).
Speaker: Kostas Repanas (EC DG RTD)
Presentation delivered by Ludo Hendrickx and Joris Beek on 11 December 2013, in Dutch, at the Ministry of the Interior, The Hague, The Netherlands. More information on: https://joinup.ec.europa.eu/community/ods/description
OpenAIRE guidelines and broker service for repository managers - OpenAIRE #OA... (OpenAIRE)
Presentation by Pedro Principe (University of Minho) and Paolo Manghi (CNR/ISTI) at the OpenAIRE Open Access Week webinar on Friday, October 28, 2016, covering the OpenAIRE compatibility guidelines and the dashboard for repository managers.
The three stages of the Power BI Deployment Pipeline (Nitor)
The deployment pipeline is an efficient tool for BI creators. Read our blog to discover details about the three stages of the Power BI deployment pipeline.
Linking Media and Data using Apache Marmotta (LIME workshop keynote) (LinkedTV)
Sebastian Schaffert is CTO and co-founder of RedLink GmbH. He is also currently working as head of the "Knowledge and Media Technologies" department at Salzburg Research and occasionally as a lecturer at the University of Applied Sciences (FH) Salzburg. He received his diploma in Computer Science in 2001 and his PhD in 2004, both at the University of Munich, Germany. His current research focus is Semantic Web technologies, especially Linked Data, Semantic Search, Information Extraction, and Multimedia Information Systems.
Keynote at LIME workshop at ESWC 2014.
The slide deck for the Power Platform presentation at SQL Saturday Redmond 2019. We reviewed the Power Platform components, why they are better together, and how to make it happen. During the demo, all the options for integrating Power Apps and Power BI were demonstrated, including data visualization changes with a new data feed. Use some of these ideas in your organization and in POCs for more complex implementations.
Webinar Industrial Data Space Association: Introduction and Architecture (Thorsten Huelsmann)
The Industrial Data Space Association is an industry- and user-driven initiative to develop a global Industrial Data Space standard and reference architecture that provides data sovereignty. The work is based on use cases and supports certifiable software solutions and business models for the data economy. This webinar by Lars Nagel and Sebastian Steinbuss gives an overview of the Industrial Data Space initiative and explains the Reference Architecture and its main components.
Data citation workshop, IDCC 2014 (Micah Altman)
Sound, reproducible scholarship rests upon a foundation of robust, accessible data. For this to be so in practice as well as theory, data must be accorded due importance in the practice of scholarship and in the enduring scholarly record. In other words, data should be considered legitimate, citable products of research.
A few days ago I was honored to officially announce the Data Citation Working Group's Joint Declaration of Data Citation Principles at IDCC 2014, from which the above quote is taken.
The Joint Declaration of Data Citation Principles identifies guiding principles for the scholarly citation of data. The recommendation is a collaborative work with CODATA, FORCE11, DataCite, and many other individuals and organizations. In the week since its release, it has already garnered over twenty institutional endorsements.
Slides introducing the principles are available on SlideShare.
To summarize, from 1977 through 2009 there were three phases of development in the area of data citation.
The first phase of development focused on the role of citation to facilitate description and information retrieval. This phase introduced the principles that data in archives should be described as works rather than media, using author, title, and version.
The second phase of development extended citations to support data access and persistence. This phase introduced the principles that research data used by publication should be cited, that those citations should include persistent identifiers, and that the citations should be directly actionable on the web.
The third phase of development focused on using citations for verification and reproducibility. Although verification and reproducibility had always been among the motivations for data archiving, they had not been a focus of citation practice. This phase introduced the principles that citations should support verifiable linkage of data and published claims, and it started the trend towards wider integration with the publishing ecosystem.
And over the last five years the importance and urgency of scientific data management and access has been recognized more broadly. The culmination of this trend toward increasing recognition, thus far, is an increasingly widespread consensus among researchers and research funders that data is a fundamental product of research and therefore a citable product. The fourth and current phase of data citation work focuses on integration with the scholarly research and publishing ecosystem. This includes integrating data citation in standardized ways within publications, catalogs, tool chains, and larger systems of attribution.
Read the full recommendation here, along with examples, references and endorsements:
Joint Declaration of Data Citation Principles
Watch Alberto's presentation from Fast Data Strategy on-demand here: https://goo.gl/CRjYuD
In this session, we will review Denodo Platform 7.0 key capabilities.
Watch this session to learn more about:
• The vision behind the Denodo Platform
• The new data catalog and self-service features of Denodo Platform 7.0
• The new connectivity, data transformation, and enterprise-wide deployment features
Rumors, hoaxes, and fake news have always existed, but the distribution of this kind of content has been amplified by the applications we have on the Web, such as social networks themselves.
The presentation frames fake news as a social problem as well, not only a technological one.
In this talk we will learn how to use the Semantic Web in practice, cover the basic concepts in a simple way, and see how to use vocabularies to publish and consume structured data. We will also look at the benefits of using Semantic Web techniques on websites.
DRM-protected content on the Web and the W3C EME specification (Newton Calegari)
In this talk we discuss DRM mechanisms on the Web, which browsers implement them, and their impact on users.
We also present the W3C Encrypted Media Extensions specification and how it is used to play protected content on the Web.
Semantic Web: using structured data in practice (Newton Calegari)
In this talk I show how to use the Semantic Web in practice and how Google has been using it in its products.
I also give tips on using RDFa and JSON-LD.
The history of CSS - Talk at cssday{} - Maceió 2016 (Newton Calegari)
Presentation given at CSSday 2016, in Maceió, covering the history of the CSS specifications, the work of the working group, and an invitation to take part in building standards with the W3C.
Semantic Web for developers: RDFa, JSON-LD and schema.org (Newton Calegari)
Slides from the presentation given at Rock and Code 2015, in Aracaju/SE, about the Semantic Web, showing how web developers can take their first steps toward publishing structured data on the Web.
The fundamental technologies of the Web - EGI 2015 (Newton Calegari)
Presentation on the fundamental technologies behind the creation of the Web, technologies that evolved to shape the Web as we know it today.
Slides presented at the Internet Governance School of Brazil (http://egi.nic.br)
Undergraduate research report - A study using big data, Twitter, and Gephi (Newton Calegari)
"Analysis and development of a big data platform for real-time data collection"
A project analyzing and visualizing messages published on Twitter during the protests that took place across Brazil in June 2013.
The analyzed data is displayed using the Gephi visualization tool.
Multi-cluster Kubernetes Networking: Patterns, Projects and Guidelines (Sanjeev Rampal)
Talk presented at Kubernetes Community Day, New York, May 2024.
Technical summary of multi-cluster Kubernetes networking architectures, with a focus on four key topics:
1) Key patterns for multi-cluster architectures
2) Architectural comparison of several OSS/CNCF projects that address these patterns
3) Evolution trends for the APIs of these projects
4) Some design recommendations and guidelines for adopting and deploying these solutions
Wireless Communication System (JeyaPerumal1)
Wireless communication involves the transmission of information over a distance without the help of wires, cables or any other forms of electrical conductors.
Wireless communication is a broad term that incorporates all procedures and forms of connecting and communicating between two or more devices using a wireless signal through wireless communication technologies and devices.
Features of Wireless Communication
The evolution of wireless technology has brought many advancements with its effective features.
The transmitted distance can be anywhere between a few meters (for example, a television's remote control) and thousands of kilometers (for example, radio communication).
Wireless communication can be used for cellular telephony, wireless access to the internet, wireless home networking, and so on.
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024 (APNIC)
Ellisha Heppner, Grant Management Lead, presented an update on the APNIC Foundation to the PNG DNS Forum, held from 6 to 10 May 2024 in Port Moresby, Papua New Guinea.
# Internet Security: Safeguarding Your Digital World
In the contemporary digital age, the internet is a cornerstone of our daily lives. It connects us to vast amounts of information, provides platforms for communication, enables commerce, and offers endless entertainment. However, with these conveniences come significant security challenges. Internet security is essential to protect our digital identities, sensitive data, and overall online experience. This comprehensive guide explores the multifaceted world of internet security, providing insights into its importance, common threats, and effective strategies to safeguard your digital world.
## Understanding Internet Security
Internet security encompasses the measures and protocols used to protect information, devices, and networks from unauthorized access, attacks, and damage. It involves a wide range of practices designed to safeguard data confidentiality, integrity, and availability. Effective internet security is crucial for individuals, businesses, and governments alike, as cyber threats continue to evolve in complexity and scale.
### Key Components of Internet Security
1. **Confidentiality**: Ensuring that information is accessible only to those authorized to access it.
2. **Integrity**: Protecting information from being altered or tampered with by unauthorized parties.
3. **Availability**: Ensuring that authorized users have reliable access to information and resources when needed.
## Common Internet Security Threats
Cyber threats are numerous and constantly evolving. Understanding these threats is the first step in protecting against them. Some of the most common internet security threats include:
### Malware
Malware, or malicious software, is designed to harm, exploit, or otherwise compromise a device, network, or service. Common types of malware include:
- **Viruses**: Programs that attach themselves to legitimate software and replicate, spreading to other programs and files.
- **Worms**: Standalone malware that replicates itself to spread to other computers.
- **Trojan Horses**: Malicious software disguised as legitimate software.
- **Ransomware**: Malware that encrypts a user's files and demands a ransom for the decryption key.
- **Spyware**: Software that secretly monitors and collects user information.
### Phishing
Phishing is a social engineering attack that aims to steal sensitive information such as usernames, passwords, and credit card details. Attackers often masquerade as trusted entities in email or other communication channels, tricking victims into providing their information.
### Man-in-the-Middle (MitM) Attacks
MitM attacks occur when an attacker intercepts and potentially alters communication between two parties without their knowledge. This can lead to the unauthorized acquisition of sensitive information.
### Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) Attacks
Bridging the Digital Gap: Brad Spiegel Macon, GA Initiative (Brad Spiegel Macon GA)
Brad Spiegel Macon GA’s journey exemplifies the profound impact that one individual can have on their community. Through his unwavering dedication to digital inclusion, he’s not only bridging the gap in Macon but also setting an example for others to follow.
34. BP2: Provide descriptive metadata
Provide metadata that describes the overall features of datasets and distributions.
Metadata
35. BP3: Provide structural metadata
Provide metadata that describes the schema and internal structure of a distribution.
Metadata
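BP2 and BP3 can be sketched together in one DCAT-style JSON-LD document. A minimal sketch, assuming an invented dataset; the title, keywords, and column names below are illustrative, not taken from the slides:

```python
import json

# Hypothetical dataset metadata combining descriptive fields (BP2)
# with the structural schema of one distribution (BP3), modeled
# loosely on the W3C DCAT and CSVW vocabularies.
metadata = {
    "@context": {
        "dcat": "http://www.w3.org/ns/dcat#",
        "dct": "http://purl.org/dc/terms/",
    },
    "@type": "dcat:Dataset",
    "dct:title": "Bus stops of the city",            # descriptive metadata
    "dct:description": "Locations of all bus stops.",
    "dcat:keyword": ["transport", "mobility"],
    "dcat:distribution": {
        "@type": "dcat:Distribution",
        "dcat:mediaType": "text/csv",
        # structural metadata: the schema of this CSV distribution
        "tableSchema": {
            "columns": [
                {"name": "stop_id", "datatype": "string"},
                {"name": "latitude", "datatype": "number"},
                {"name": "longitude", "datatype": "number"},
            ]
        },
    },
}

print(json.dumps(metadata, indent=2))
```

Publishing both kinds of metadata in one machine-readable document lets catalog crawlers and human readers discover the dataset and understand its layout without downloading it first.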
36. BP4: Provide data license information
Provide a link to or copy of the license agreement that controls use of the data.
Data Licenses
37. BP5: Provide data provenance information
Provide complete information about the origins of the data and any changes you have made.
Data Provenance
38. BP7: Provide a version indicator
Assign and indicate a version number or date for each dataset.
Data Versioning
39. BP8: Provide version history
Provide a complete version history that explains the changes made in each version.
Data Versioning
40. BP9: Use persistent URIs as identifiers of datasets
Identify each dataset by a carefully chosen, persistent URI.
Data Identifiers
41. BP10: Use persistent URIs as identifiers within datasets
Reuse other people's URIs as identifiers within datasets where possible.
Data Identifiers
42. BP11: Assign URIs to dataset versions and series
Assign URIs to individual versions of datasets as well as to the overall series.
Data Identifiers
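The identifier practices of BP9 and BP11 can be sketched as a simple URI layout; the domain, paths, and helper function below are hypothetical, chosen only to illustrate a series URI alongside per-version URIs:

```python
# Hypothetical persistent-URI layout: one URI identifies the dataset
# series as a whole (BP11), and each version gets its own URI under it.
SERIES_URI = "https://example.org/dataset/bus-stops"
VERSION_URI_TEMPLATE = SERIES_URI + "/version/{version}"


def version_uri(version):
    """Mint the URI for one dataset version.

    Per BP9, the URI is chosen once and kept stable; consumers can
    cite either the series or an exact version.
    """
    return VERSION_URI_TEMPLATE.format(version=version)


print(version_uri("1.0"))  # https://example.org/dataset/bus-stops/version/1.0
```

Keeping the version path derivable from the series URI makes both citable and lets clients move between "latest" and "exact snapshot" views predictably.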
43. BP12: Use machine-readable standardized data formats
Make data available in a machine-readable, standardized data format that is well suited to its intended or potential use.
Data Formats
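As a minimal illustration of BP12 (not from the slides; the rows and field names are invented), tabular data serialized to standardized CSV can be round-tripped by any consumer with a stock parser, with no screen scraping:

```python
import csv
import io

rows = [
    {"stop_id": "S1", "latitude": "-23.55", "longitude": "-46.63"},
    {"stop_id": "S2", "latitude": "-23.56", "longitude": "-46.65"},
]

# Serialize as CSV, a machine-readable, standardized format (RFC 4180).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["stop_id", "latitude", "longitude"])
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()

# Any consumer can parse it back with a standard library alone.
parsed = list(csv.DictReader(io.StringIO(csv_text)))
print(parsed[0]["stop_id"])
```

The same data published as a PDF table or an HTML page would force every consumer to write a bespoke extractor, which is exactly what this best practice avoids.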
44. BP13: Use locale-neutral data representations
Use locale-neutral data structures and values, or, where that is not possible, provide metadata about the locale used by data values.
Data Formats
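A small example of BP13: a string like "12/04/2016" is ambiguous (April 12 or December 4, depending on locale), whereas ISO 8601 values are locale-neutral. Python's standard library produces them directly:

```python
from datetime import date, datetime, timezone

# Locale-neutral date and timestamp values per ISO 8601.
d = date(2016, 4, 12)
iso_date = d.isoformat()  # unambiguous: year-month-day

ts = datetime(2016, 4, 12, 15, 30, tzinfo=timezone.utc)
iso_ts = ts.isoformat()   # includes an explicit UTC offset

print(iso_date)  # 2016-04-12
print(iso_ts)    # 2016-04-12T15:30:00+00:00
```

The same reasoning applies to numbers: publish "23.55" with a decimal point and provide locale metadata only when a locale-specific form cannot be avoided.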
45. BP14: Provide data in multiple formats
Make data available in multiple formats when more than one format suits its intended or potential use.
Data Formats
46. BP17: Provide bulk download
Enable consumers to retrieve the full dataset with a single request.
Data Access
47. BP19: Use content negotiation for serving data available in multiple formats
Use content negotiation in addition to file extensions for serving data available in multiple formats.
Data Access
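A minimal sketch of the negotiation step behind BP19. This is a simplified, hypothetical helper: it walks the media types in the Accept header in order and ignores quality values and wildcards, which real servers also honor:

```python
def negotiate(accept_header, available):
    """Pick the best available media type for an Accept header.

    Simplified negotiation: try each type the client lists, in order,
    stripping parameters like ';q=0.9'; fall back to the server's
    first (default) representation when nothing matches.
    """
    wanted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    for media_type in wanted:
        if media_type in available:
            return media_type
    return available[0]


formats = ["text/csv", "application/json"]
print(negotiate("application/json, text/html", formats))  # application/json
print(negotiate("text/html", formats))                    # text/csv (fallback)
```

The point of the best practice is that one URI can serve CSV to one client and JSON to another, while file extensions (data.csv, data.json) remain available for clients that cannot set headers.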
48. BP21: Provide data up to date
Make data available in an up-to-date manner, and make the update frequency explicit.
Data Access
49. BP23: Make data available through an API
Offer an API to serve data if you have the resources to do so.
Data Access APIs
50. BP24: Use Web Standards as the foundation of APIs
When designing APIs, use an architectural style that is founded on the technologies of the Web itself.
Data Access APIs
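As an illustration of BP24 (a hypothetical sketch, not the workshop's code; the catalog and paths are invented), an API founded on Web standards maps URIs to resources and reports outcomes with standard HTTP status codes and media types, rather than inventing a bespoke RPC protocol:

```python
import json

# Hypothetical in-memory catalog; identifiers and fields are illustrative.
DATASETS = {"bus-stops": {"title": "Bus stops of the city", "version": "1.0"}}


def handle_get(path):
    """Resolve a GET on a resource URI to (status, content_type, body).

    Web-standards style: the URI identifies the resource, 200/404 are
    standard status codes, and JSON is served under its media type.
    """
    parts = path.strip("/").split("/")
    if len(parts) == 2 and parts[0] == "datasets" and parts[1] in DATASETS:
        return 200, "application/json", json.dumps(DATASETS[parts[1]])
    return 404, "text/plain", "Not Found"


status, ctype, body = handle_get("/datasets/bus-stops")
print(status, ctype)  # 200 application/json
```

Because the interface is plain HTTP, any generic client, cache, or crawler can use it without a custom SDK, which is the practical payoff of building on the Web's own technologies.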
51. BP25: Provide complete documentation for your API
Provide complete information on the Web about your API. Update documentation as you add features or make changes.
Data Access APIs
52. BP29: Gather feedback from data consumers
Provide a readily discoverable means for consumers to offer feedback.
Feedback
53. BP30: Make feedback available
Make consumer feedback about datasets and distributions publicly available.
Feedback
54. BP31: Enrich data by generating new data
Enrich your data by generating new data when doing so will enhance its value.
Data Enrichment
55. BP33: Provide Feedback to the Original Publisher
Let the original publisher know when you are reusing their data. If you find an error or have suggestions or compliments, let them know.
Republication
56. BP34: Follow Licensing Terms
Find and follow the licensing requirements from the original publisher of the dataset.
Republication
57. BP35: Cite the Original Publication
Acknowledge the source of your data in metadata. If you provide a user interface, include the citation visibly in the interface.
Republication