The document provides guidance on early planning for data management, including becoming familiar with funder requirements, planning for the types and formats of data that will be created, designing a system for taking notes, organizing files through consistent naming schemes and use of folders, adding metadata to files to aid in documentation and discovery, and using RSS feeds to organize web-based information. It also touches on issues like plagiarism, data protection, intellectual property rights, and remote access to and backup of data.
3. What is data?
The Royal Society. (2012). Science as an open enterprise. London: The Royal Society.
4. What is research data?
“‘Research data’ are defined as factual records (numerical scores, textual records, images and sounds) used as primary sources for scientific research, and that are commonly accepted in the scientific community as necessary to validate research findings. A research data set constitutes a systematic, partial representation of the subject being investigated.” (OECD, 2007, p. 13)
OECD. (2007). OECD Principles and Guidelines for Access to Research Data from Public Funding. Available at www.oecd.org/sti/sci-tech/38500813.pdf (retrieved 18 October 2013).
5. Digital universe
EMC. (2012). The digital universe in 2020: big data, bigger digital shadows, and biggest growth in the Far East. Available at http://www.emc.com/leadershi (retrieved 14 January 2014).
7. Data types
The Royal Society. (2012). Science as an open enterprise. London: The Royal Society.
9. Do you know what your funders expect of your research?
What plans have you made for your research data?
What type of note-taking have you designed?
10. Early planning 〉 1. Funding bid requirements
Become familiar with what funders expect in terms of:
• Managing generated data (how you will document and maintain the research you produce);
• Publishing results (how/where to publish);
• Sharing outputs (open access types);
• Depositing and preserving outputs (how you will ensure your data is accessible in the long term, such as depositing papers in a repository or using a recommended data centre for safekeeping).
Help provided by:
• Department/group computing officer(s)
• Cambridge Research Office
• DSpace@Cambridge support staff
• Librarians
11. Early planning 〉 2. Data planning
Plan ahead for your data management needs:
• Type of data created
• Consider what data will be created (e.g. interview data and transcripts, experimental measurements, high-resolution imaging…);
• Consider how data will be created/captured (e.g. recorded, printed, made available in a website/intranet);
• Consider the equipment/software required (find out if there is funding in case new software is needed).
12. Early planning 〉 2. Data planning
Plan ahead for your data management needs:
• Choose what data format(s) to use, considering:
• What discipline-specific norms already exist;
• What software/formats you or colleagues have used in past projects, and which will be easiest to share with others (e.g. Microsoft Excel for recording data, SPSS for analysis);
• What formats will be easiest to annotate with metadata;
• What formats are at risk of obsolescence (a format-conversion sketch follows this slide);
• What software is compatible with hardware you already have.
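Where obsolescence is a risk, converting a working copy into an open format can be scripted. A minimal sketch, assuming a hypothetical spreadsheet file and the pandas library (with openpyxl installed for reading .xlsx):

```python
# Minimal sketch: export a proprietary spreadsheet to an open, plain-text
# format (CSV). The filename is a hypothetical example; requires pandas
# (and openpyxl to read .xlsx files).
import pandas as pd

df = pd.read_excel("measurements.xlsx")   # proprietary working format
df.to_csv("measurements.csv", index=False, encoding="utf-8")  # open, long-lived copy
```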
13. Early planning 〉 2. Data planning
Plan ahead for your data management needs:
• Volume of data created
• Consider where data is going to be stored;
• Consider if the scale of data poses challenges when sharing/transferring data.
• Plan how to sort and analyse data;
• Investigate intellectual property rights (IPR) concerning your research and its dissemination, future related research projects, and associated profit or credit.
14. Early planning 〉 2. Data planning
Plan ahead for your data management needs:
• Investigate data protection and ethics
According to the Data Protection Act 1998 (which governs the processing of personal data), information must follow eight data protection principles. Personal data must be:
• processed fairly and lawfully
• obtained for specified and lawful purposes
• adequate, relevant and not excessive
• accurate and, where necessary, kept up to date
• not kept for longer than necessary
• processed in accordance with the subject's rights
• kept secure
• not transferred abroad without adequate protection
16. Early planning 〉 4. Note-taking
Design a reading grid to take notes of the main ideas/data/research (including specific citations that you may want to use later on).
• Quivy and Campenhoudt
Quivy, R.; Campenhoudt, L. (2008). Manual de investigação em ciências sociais (5th ed.). Lisboa: Gradiva.

Main ideas/content                        | Evaluation of ideas/content
1. e.g. Theory A considers… (pages x-x)   | e.g. Different theories; take further research on those supporting theory x and theory y
2. e.g. Theory B considers…               |
3. e.g. Theory C…                         |
17. Early planning 〉 4. Note-taking
• The Cornell Method
Pauk, W. (1993). How to study in college (5th ed.). Boston: Houghton Mifflin Co.

Major themes                                               | Detailed points
1st main point (e.g. There are several types of theories)  | More detailed information, e.g. Theory A explains…; Theory B explains…; Theory C explains…
2nd main point (e.g. Why do some believe in theory A)      | e.g. Reason 1…; e.g. Reason 2…
Critical evaluation                                        | e.g. Both theories A and B do not explain the occurrence of xxx.
18. Early planning
Further information
• Cambridge University Intellectual Property Rights Regulations
• DSpace@Cambridge IPR page
• JISC Legal IPR page
• DPA 1998: advice for Cambridge staff and students
• University page about the Data Protection Act 1998
• UK Data Archive: Duty of confidentiality
• The Information Commissioner's Office: Guide to data protection
• JISC Legal: Guide to data protection
• Contact the Data Protection Office: data.protection@admin.cam.ac.uk
University self-taught courses:
• Data Protection Training for Academic Staff
• Data Protection Training for Administrators
20. How do you organise your files?
How do you name your files?
Do you create metadata to help describe your data?
Do you manage your emails?
How do you organize your bibliographic references?
Do you have remote access to your data?
21. Organize your data 〉 1. Naming and organizing files
• Adhere to existing procedures (within your research group, Department or preferred by your supervisor);
• Use folders and subfolders
• Name folders appropriately (e.g. after the areas of work and not after individual researchers or students);
• Be consistent with a naming scheme;
• Structure folders hierarchically (limited number of folders for the broader topics, and more specific folders within these);
• Separate on-going and completed work;
• Be consistent with filenames
• Choose a standard vocabulary: use a revision numbering system (e.g. xxxx_v01.doc; 1930film0001.tif); specify the number of digits to use (standard: eight-character limit);
22. Organize your data 〉 1. Naming and organizing files
• Be consistent with filenames
• Decide on the use of dates so that documents are displayed chronologically;
• Include a version control table for important documents;
• Avoid characters such as / : * ? < > | (because they are reserved for the operating system) and spaces (use hyphens or underscores, particularly with files destined for the Web);
• When drafts are circulating, decide how to identify individuals (e.g. xxxx_gdcf2_v01.doc);
• Mark the final document as "Final" and prevent further changes (a filename sketch follows this slide).
• Review records (assess materials regularly or at the end of a project to ensure files aren't kept needlessly);
• Backup your files/data/favourites.
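A minimal sketch of the naming scheme above: a consistent, web-safe stem with no reserved characters or spaces, an ISO date so files sort chronologically, and a zero-padded revision number. The project name and extension are hypothetical examples:

```python
# Minimal sketch: build a consistent, web-safe filename with an ISO date
# (sorts chronologically) and a zero-padded revision number.
import re
from datetime import date

def make_filename(project: str, version: int, ext: str = "doc") -> str:
    # Replace spaces and reserved characters (/ : * ? < > |) with underscores.
    stem = re.sub(r"[^A-Za-z0-9_-]", "_", project.lower())
    return f"{stem}_{date.today().isoformat()}_v{version:02d}.{ext}"

print(make_filename("proj x", 1))  # e.g. proj_x_2014-02-17_v01.doc
```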
23. Organize your data 〉 2. Documentation and metadata
• Use metadata (data about data - usually embedded in the data files/documents themselves) to add information to your documents (e.g. use Microsoft Office's "Document properties").
• Create both study-level information about the research and data creation, as well as descriptions and annotations at the variable, data item or data file level;
• Provide searchable information to help you/others find information.
24. Organize your data 〉 2. Documentation and metadata
• Standard metadata fields:
• Title (name of the dataset or research project);
• Creator (organization or people who created the data);
• Identifier (number used to identify the data);
• Subject(s) (keywords describing the subject or content of the data);
• Funders;
• Rights (known intellectual property rights held for the data);
• Access information (where/how data can be accessed by others);
• Significant dates (project start and end date; release date; data lifespan; update schedule);
• Methodology (how the data was generated);
• Code lists (explanation of codes or abbreviations used);
• Versions (date/time stamp for each file);
• List of file names (list of all data files associated with the project).
A sketch of recording these fields in a machine-readable file follows this slide.
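One lightweight way to keep such fields with the data is a machine-readable "sidecar" file stored alongside it. A minimal sketch, with all field names and values hypothetical:

```python
# Minimal sketch: write study-level metadata as a JSON sidecar file stored
# alongside the data it describes. All values are hypothetical examples.
import json

metadata = {
    "title": "Example research project dataset",
    "creator": "A. Researcher, Example University",
    "identifier": "dataset-0001",
    "subjects": ["research data management", "example"],
    "funders": ["Example Funding Council"],
    "rights": "CC BY 4.0",
    "access": "Available on request from the project repository",
    "dates": {"start": "2014-01-01", "end": "2014-12-31"},
    "methodology": "Semi-structured interviews, transcribed verbatim",
    "code_lists": {"NA": "value missing"},
    "versions": {"measurements.csv": "2014-02-17T10:00:00"},
    "files": ["measurements.csv", "transcripts/"],
}

with open("dataset-0001.metadata.json", "w", encoding="utf-8") as f:
    json.dump(metadata, f, indent=2)
```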
25. Organize your data 〉 2. Documentation and metadata
Further information
• Data Documentation Initiative
• UK Data Archive: Documenting your data
• MIT Libraries: Documentation and metadata
• Library of Congress Authorities
• JISC Digital Media: Approaches to describing images
Help provided by DSpace@Cambridge: support@repository.cam.ac.uk
27. Organize your data 〉 4. Manage your email
• Structure your folders by subject, activity or project;
• Set up a separate folder for personal emails (create filters so they go directly there);
• Archive old emails (even if it's in an "Archive" folder);
• Delete useless emails and block junk email;
• Limit the use of attachments (use alternative 'data sharing' options) but, if important, save them;
• Try applications to help you manage your email (see "7 great services for taking back control of your inbox").
28. Organize your data 〉 5. Managing references
• Keep track of every bibliographic reference used/seen;
• Use reference management software;
• Backup your bibliographic data.
Further information
University Library webpage about Mendeley, Zotero and EndNote
30. Organize your data 〉 6. Remote access
• Departmental/college Virtual Private Network (VPN)
See the University Computing Service info sheet
• Desktop Services Account
See the University Computing Service Introduction to Desktop Services
• Research group's CamTools site (Moodle in the future)
See CamTools site; CamTools Helpdesk: camtoolshelp@admin.cam.ac.uk
• Online services that provide storage (e.g. Dropbox)
• Online/desktop programs to store documents and keep track of the changes made to them (e.g. Git) (see the sketch after this list)
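For the last option, taking versioned snapshots of a project folder can be automated. A minimal sketch, assuming the git command-line tool is installed and configured; the folder path and commit message are hypothetical:

```python
# Minimal sketch: put a project folder under Git version control so changes
# to documents are recorded over time.
import subprocess

def snapshot(folder: str, message: str) -> None:
    subprocess.run(["git", "init"], cwd=folder, check=True)   # idempotent
    subprocess.run(["git", "add", "-A"], cwd=folder, check=True)
    # Note: the commit step fails if nothing has changed since the last snapshot.
    subprocess.run(["git", "commit", "-m", message], cwd=folder, check=True)

snapshot("/path/to/project-data", "Add cleaned interview transcripts")
```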
31. Organize your data 〉 7. Keep your data safe
• Key printed data should be kept in a secure location where access can be restricted to authorised personnel, or in locked cupboards;
• Keep your sensitive electronic data password-protected or encrypted, or set privileged levels of access (including backups);
• Do not use printouts with sensitive data as scrap paper. Choose efficient methods of disposal (like shredding);
• Computer terminals should not be left unattended and should be logged off at the end of each session;
• Protect your computer with anti-virus, firewall and anti-keylogging software;
32. Organize your data 〉 7. Keep your data safe
• Choose strong passwords (use a mix of upper- and lower-case letters and digits/punctuation characters) (a generation sketch follows this slide);
• If you store passwords on a computer system, encrypt the file;
• Never give your password to other people;
• Frequently change passwords.
Further information
• University Computing Service: Password? What password?
• CUED departmental policy on data protection
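A minimal sketch of generating a password that mixes all four character classes described above, using Python's standard secrets module:

```python
# Minimal sketch: generate a strong password mixing upper- and lower-case
# letters, digits and punctuation, using the cryptographic secrets module.
import secrets
import string

def strong_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until the candidate really contains all four classes.
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(strong_password())
```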
• Store crucial data in more than one secure location:
• Networked drives;
• Personal computers/laptops;
• External storage devices (CDs, DVDs, USB flash drives);
• Remote or online systems for storage (Dropbox, Mozy, A-Drive, etc.).
A backup-and-verify sketch follows this list.
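A minimal sketch of copying a crucial file to several locations and verifying each copy with a checksum; all paths are hypothetical examples:

```python
# Minimal sketch: back a file up to more than one location and verify each
# copy against a SHA-256 checksum of the original.
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup(source: str, destinations: list[str]) -> None:
    src = Path(source)
    digest = sha256(src)
    for dest in destinations:
        target = Path(dest) / src.name
        shutil.copy2(src, target)  # copy, preserving timestamps
        if sha256(target) != digest:
            raise IOError(f"corrupt copy: {target}")

backup("results/experiment01.csv",
       ["/mnt/networked-drive/backups", "/media/usb/backups"])
```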
34. Further information
Jones, S. (2011). How to Develop a Data Management and Sharing Plan. Edinburgh: Digital Curation Centre. Available at http://www.dcc.ac.uk/resources/how-guides/develop-data-plan#s (retrieved 17 February 2014).
36. How do you decide what to keep/delete?
Where/how are you going to preserve your data?
37. Preserving your data 〉 1. Information in the cloud
EMC. (2012). The digital universe in 2020: big data, bigger digital shadows, and biggest growth in the Far East. Available at http://www.emc.com/leadershi (retrieved 14 January 2014).
38. Preserving your data 〉 2. What to keep/delete
• Does your funder/university need to keep data and/or make it available for a certain amount of time?
• Is the data a vital record of a project/organisation/consortium and therefore needs to be retained indefinitely?
• Do you have the legal and intellectual property rights to keep and re-use the data? If not, can these be negotiated?
• Does sufficient metadata exist to allow data to be found wherever it is stored?
• If you need to pay to keep the data, can you afford it?
• Only store what you need to keep! Storage costs money and/or effort, and storing massive amounts of data requires a well-thought-out plan to organize it so that information is easily found.
39. Preserving your data 〉 3. Storage
Further information
The University Computing Service provides up to 500 MB of centralised file storage space through the public workstation facility (PWF), which also allows you to store and access files online.
Some colleges/departments/research groups provide networked storage (ask your local computing officers for details).
• Digital Curation Centre: The value of digital curation
• UK Data Archive: FAQ
• Engineering Research Information Management Project (ERIM)
• National Preservation Office: Caring for CDs and DVDs
• Wikipedia: List of backup software
• Wikipedia: Comparison of online backup services
40. Preserving your data 〉 4. Long-term storage
• Digital repositories
Provide online archival storage - usually open access - and care for digital materials, ensuring that they remain readable for as long as the repository survives.
e.g. DSpace@Cambridge
• Archive/data centre
Ensures data safe-keeping in the long term: datasets are fully documented with all bibliographical details, and users of the data are aware of the need to acknowledge the data sources in publications.
e.g. Archaeology Data Service
41. Summary
Digital Curation Centre. (cop. 2004-2014). DCC curation lifecycle model [image]. Available at http://www.dcc.ac.uk/resources/curation-lifecycle-model (retrieved 17 February 2014).
43. Should you share your data/research?
Are there impediments to sharing data/research?
Do you have/need a marketing plan to publicise your research?
44. Market your data 〉 1. Reasons to share
• Scientific integrity - publishing your data and citing its location in published research papers can allow others to replicate, validate, or correct your results, thereby improving the scientific record.
• Funding mandates - UK research councils are increasingly mandating data sharing so as to avoid duplication of effort and save costs.
• Raise/increase the impact of your research - those who make use of your data and cite it in their own research will help to increase your impact within your field and beyond it.
• Preserve your data for future use - you benefit by being able to identify, retrieve, and understand the data yourself after you have lost familiarity with it, perhaps several years hence.
45. Market your data 〉 1. Reasons to share
• Teaching purposes - your data may be ideal for others to learn how to collect and analyse similar types of data themselves.
• Making publicly funded research available publicly - there is a growing movement for making publicly funded research available to the public, as indicated, for example, in the Organisation for Economic Co-operation and Development (OECD) Principles and Guidelines for Access to Research Data from Public Funding.
• Increase transparency through creating, disseminating and curating knowledge.
• Increase collaboration - the use of archived data by other researchers may lead to collaboration with the data owner and to co-authorship of publications based on re-use of the data.
46. Market your data 〉 2. Reasons not to share
• If your data has financial value or is the basis for potentially valuable patents that could be exploited by the University, it may be unwise to share it, even with a data licence or terms and conditions attached.
• If the data contains sensitive, personal information about human subjects, it may violate the Data Protection Act, ethics codes, or your own written consent forms to share it, even with other researchers. (Often there are ways to anonymise the data to remove the personally identifying information from it, thus making it sharable as a public-use dataset; see the sketch after this slide.)
• If parts of the data are owned by others, such as commercial entities or authors, then even if you have derived wholly new data from the original sources you may not have the rights to share the data with others.
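One common anonymisation step is replacing direct identifiers with stable pseudonyms. A minimal sketch, with hypothetical file and column names; real anonymisation needs a fuller disclosure review than this single step:

```python
# Minimal sketch: pseudonymise one identifying column of a CSV file by
# replacing names with salted hashes. File/column names are hypothetical,
# and this is only one step of a proper anonymisation process.
import csv
import hashlib

SALT = "project-secret-salt"  # keep this value out of the shared dataset

def pseudonym(value: str) -> str:
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:10]

with open("interviews.csv", newline="") as src, \
     open("interviews_public.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        row["participant_name"] = pseudonym(row["participant_name"])
        writer.writerow(row)
```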
47. Market your data 〉 3. How do you market?
• Publish in open access journals or deposit a copy into DSpace@Cambridge;
• Enhance your online presence through social media (Facebook, Twitter; start and maintain a blog);
• Use author identification (ResearcherID from Web of Science; Scopus ID; ORCID);
• Share research in "academic" platforms (LinkedIn, Academia.edu, ResearchGate, Microsoft Academic Search, Mendeley);
• Keep track of different metric statistics (number of citations);
48. Market your data 〉 3. How do you market?
Further information
• Digital Curation Centre: Overview of major funders' data policies
• SHERPA JULIET: searchable international database of funders' open access and archiving requirements
• Times Higher Education supplement: "Research intelligence - Request hits a raw spot" (15 July 2010)
• DSpace@Cambridge
• DOAJ - Directory of Open Access Journals (with information on OA journal preservation programmes and OA quality standards)
• OAD - Open Access Directory
52. Department of Engineering, Library and Information Service
cued-library@eng.cam.ac.uk
Telephone: +44 1223 332626
Editor's Notes
Referencing is presenting the details of a publication so that it can be unequivocally identified. Data to be included depends on the type of source (book, article, website) and on the reference style being used (IEEE, Harvard, Oxford, APA…).
“A reference is a springboard to new knowledge or a new perspective on your topic; it’s part of a discovery pathway, a way for you to reconstruct how an author got to a certain point of view, what influenced them and the development of their thinking.” (Coonan, 2013)
Our Curation Lifecycle Model provides a graphical, high-level overview of the stages required for successful curation and preservation of data from initial conceptualisation or receipt through the iterative curation cycle.
You can use our model to plan activities within your organisation or consortium to ensure that all of the necessary steps in the curation lifecycle are covered.
See more at: http://www.dcc.ac.uk/resources/curation-lifecycle-model