A basic course on Research data management, part 3: sharing your data (Leon Osinski)
A basic course on research data management for PhD students. The course consists of 4 parts. The course was given at Eindhoven University of Technology (TUe), 24-01-2017
This document provides guidance on managing research data. It discusses planning ahead by considering data needs, formats, volume and ethics. It also covers organizing data through file naming, metadata, references, remote access and safekeeping. Preserving data involves determining what to keep/delete and using long-term storage such as repositories. Reasons for sharing data include scientific integrity, funding mandates and increasing impact, while reasons for not sharing include financial or sensitive personal information.
The document discusses file management concepts including file structures, directories, file allocation methods, and access rights. It describes common file structures like sequential, indexed sequential, and direct files. It also covers directory structures, file sharing concepts like simultaneous access and access rights, and secondary storage management techniques like preallocation and allocation methods.
All information in a file is stored in binary form, as a series of ones and zeros. A document is any file you have created: a true text document, a sound file, graphics, images, or any other type of information the computer can create, store, or retrieve from the internet.
Data Management in the context of Open Science.
Because open access has become mandatory for publications and for project-funded research data, it is each researcher's responsibility to stay informed and to be trained in these new practices.
This document defines key concepts related to computer files. It discusses:
1. File organization types including serial, sequential, direct access, and indexed sequential. Sequential files store records in key sequence while direct access allows direct retrieval by calculating a record's address.
2. Methods of accessing files which can be serial, sequential, or direct/random.
3. Criteria for classifying files as master, transactional, or reference files based on their content, organization, and storage medium.
4. An assignment to research operating procedures for computer data processing.
Good (enough) research data management practices (Leon Osinski)
Slides of a lecture on research data management (RDM), given for 3rd year students (Eindhoven University of Technology, major Psychology & Technology), as part of the course 0HV90 Quantitative Research. At the end of the slides a handy summary 'Research data management basics in a nutshell' is added.
This document discusses different file organization techniques for conventional database management systems. It describes sequential file organization where records are stored consecutively. Indexed sequential file organization is introduced to improve query response time for sequential files by adding an index. Direct file organization and multi-key file organization are also mentioned, which allow accessing records using different keys. Trade-offs among these techniques are discussed.
A basic course on Research data management, part 4: caring for your data, or ... (Leon Osinski)
A basic course on research data management for PhD students. The course consists of 4 parts. The course was given at Eindhoven University of Technology (TUe), 24-01-2017
A basic course on Research data management, part 1: what and why (Leon Osinski)
A basic course on research data management for PhD students. The course consists of 4 parts. The course was given at Eindhoven University of Technology (TUe), 24-01-2017
This document discusses best practices for organizing, managing, and publishing research data. It recommends using standardized file naming and folder structures, documenting data through code books and metadata, selecting open formats, and considering issues like data security, versions, and citations. FAIR principles of findable, accessible, interoperable and reusable data are presented. Options in Finland for publishing and archiving research data include repositories like FSD Tietoarkisto and Zenodo. Adopting these practices helps ensure well-organized, documented data that can enable reproducibility and reuse.
This document summarizes a course on data archiving and processing. The course covers archiving theory and practices, data structures and processing, survey documentation, and user guides. It discusses different types of survey designs including cross-sectional, panel, cohort, and retrospective designs. It provides examples of deriving variables, creating analytic files, data linkage for hierarchical and longitudinal data, and exercises for participants. The intended outcomes are understanding the need to archive data, differentiating data types and structures, using software to process and analyze data, and appreciating user guides.
Data carving using artificial headers info sec conference (Robert Daniel)
This document proposes a new approach to data carving called File Recovery using Artificial Headers (FRAH) that can recover files with corrupted or missing headers. An evaluation of existing data carving tools found they have difficulty recovering fragmented files. FRAH works by inserting an artificial header onto files to circumvent missing headers. Testing showed FRAH could successfully recover files that standard tools could not. However, FRAH has limitations in recovering files where payload data is also missing. Further research is needed to make FRAH more robust.
David Shotton - Research Integrity: Integrity of the published record (Jisc)
The document discusses several issues related to publishing research data and proposes solutions to address them. It describes projects that aim to make it easier for researchers to publish, archive, cite and reuse research data. This includes developing metadata standards, data repositories, and publishing data citations as linked open data to improve data discovery and attribution.
The document discusses file design and organization in information systems. It describes the key components of files, including data items, records, record keys, and entities. It explains different file organizations like sequential, direct access, indexed, and inverted files. It also discusses designing printed outputs, including determining output objectives, contents, layout, and appropriate output media.
Research data management: course OGO Quantitative research (21-11-2018) (Leon Osinski)
Research data management involves three key aspects: 1) protecting data through organized file naming and folder structures, 2) sharing data via collaboration platforms or archives to enable reproducibility and reuse, and 3) caring for data through tidy formatting, thorough metadata and documentation, and use of open standards to ensure understandability and usability.
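The first of these aspects, organized file naming, can be sketched as a small helper. The convention shown here (project_experiment_date_version) is a hypothetical example for illustration, not a scheme mandated by the course.

```python
from datetime import date

def data_filename(project: str, experiment: str, day: date,
                  version: int, ext: str = "csv") -> str:
    """Build a standardized, sortable file name: project_experiment_YYYYMMDD_vNN.ext.

    Pieces are lowercased and spaces become hyphens so names stay shell-
    and URL-friendly; the zero-padded version number keeps files sorted.
    """
    slug = lambda s: s.strip().lower().replace(" ", "-")
    return f"{slug(project)}_{slug(experiment)}_{day:%Y%m%d}_v{version:02d}.{ext}"

print(data_filename("Reaction Times", "pilot study", date(2018, 11, 21), 3))
# reaction-times_pilot-study_20181121_v03.csv
```

A name built this way encodes its own minimal metadata, so a folder listing already tells a collaborator what each file contains and which version is newest.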
This document provides an agenda for Part II of an SPP 2089 data management training. The agenda includes topics such as troubleshooting common data upload issues, improving dataset quality, and attaching metadata to data. Techniques for updating datasets, ensuring data consistency and completeness, linking related datasets, and adding explanatory information to datasets are discussed. The training emphasizes using the BEXIS2 data management platform to properly store, organize, and document research data over the full data lifecycle in accordance with SPP 2089 guidelines.
The document discusses different types of file organization including sequential, random, indexed sequential, and multikey organization. It describes the key aspects of each type including how records are stored and accessed. The document also outlines different types of files such as master files, transaction files, and control files along with examples and characteristics of each.
This document outlines an agenda for a data management training session. The full-day session will cover basics in the morning, advanced topics after lunch, and end with a question and answer period and required homework. Attendees will learn about account creation and login procedures for various research platforms, file labeling standards, and data management best practices including uploading, downloading, sharing and archiving data throughout its lifecycle. The document provides details on specific topics to be covered as well as templates and guidelines for research activities like field and column experiments.
This document discusses primary and secondary storage. Secondary storage is used for permanent storage of data in files and has greater storage capacity than primary storage. A file contains records with fields, and each record is uniquely identified by a key field like student ID. Logical files connect programs to physical files on secondary storage. Files can be accessed sequentially, randomly using indexing, or directly using the key value.
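The record-and-key idea above can be sketched with fixed-length records, where a record's byte offset is computed directly from its slot number. The layout (a 4-byte student ID plus a 16-byte name) is an assumption chosen for illustration.

```python
import io
import struct

REC = struct.Struct("<I16s")  # fixed-length record: 4-byte student ID + 16-byte name

def write_record(f, slot: int, student_id: int, name: str) -> None:
    f.seek(slot * REC.size)                      # direct access: offset = slot * record size
    f.write(REC.pack(student_id, name.encode().ljust(16, b"\0")))

def read_record(f, slot: int):
    f.seek(slot * REC.size)
    sid, raw = REC.unpack(f.read(REC.size))
    return sid, raw.rstrip(b"\0").decode()

f = io.BytesIO()                                 # stands in for a file on secondary storage
write_record(f, 0, 1001, "Alice")
write_record(f, 1, 1002, "Bob")
print(read_record(f, 1))                         # (1002, 'Bob'), without scanning record 0
```

Because every record has the same size, jumping straight to record *n* is one `seek`; sequential access is just reading the slots in order.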
Ruth Duerr, data scientist and steward at the National Snow & Ice Data Center, CIRES and CU-Boulder, describes the new data citation policy for American Geophysical Union (AGU) journals. She shows examples of each part of a good citation, and answers questions about where to house data.
This document summarizes Mercè Crosas's presentation on the expanding dataverse and advances in data publishing. It discusses the growth of digital data and need for data citation, repositories, and metadata to make data discoverable, accessible, and reusable. The Dataverse software provides a framework for publishing data across different repository types. Recent improvements allow for rigorous data citation compliant with principles, rich metadata, support for public and restricted data, and publication workflows. Future areas of focus include integration with other systems, support for sensitive data, and expanding data citation and APIs.
Report blocking, management of files in secondary memory, static vs dynamic a... (NoorMustafaSoomro)
Three key topics were discussed:
1) Record blocking - the process of grouping related data records into blocks for storage. Fixed, variable-length spanned, and unspanned blocking methods were described.
2) File management in secondary memory - files have attributes like name, size, permissions. Common file operations are create, open, read, write. Directory structures and access paths organize files.
3) Memory allocation - static allocation assigns memory at compile time while dynamic allocation occurs at runtime using functions like malloc(), free(), calloc(), realloc(). Contiguous, linked lists and indexing are approaches to storing files and managing free space.
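The fixed-blocking arithmetic described in point 1 reduces to a quotient and a remainder: with fixed, unspanned blocking, a record number maps directly to a block number and an offset within it. The block and record sizes below are illustrative, not values from the slides.

```python
BLOCK_SIZE = 512          # bytes per block (illustrative)
RECORD_SIZE = 120         # bytes per fixed-length record (illustrative)
RECORDS_PER_BLOCK = BLOCK_SIZE // RECORD_SIZE   # unspanned: records never cross blocks

def locate(record_no: int) -> tuple[int, int]:
    """Map a record number to (block number, byte offset within that block)."""
    block = record_no // RECORDS_PER_BLOCK
    offset = (record_no % RECORDS_PER_BLOCK) * RECORD_SIZE
    return block, offset

# With 4 records per 512-byte block, record 9 sits in block 2 at offset 120.
print(RECORDS_PER_BLOCK, locate(9))
```

Spanned blocking would instead let a record straddle two blocks, trading this simple address calculation for less wasted space at the end of each block.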
This document discusses various aspects of file systems including:
1. It defines what a file is and lists some common file attributes like name, size, and timestamps.
2. It describes different file operations like create, read, write, delete and different methods to access and store files like sequential, random, and index access.
3. It discusses file system implementation techniques like contiguous allocation, linked lists, and i-nodes and how free space is managed through approaches like bitmaps and linked lists.
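The bitmap approach to free-space management mentioned in point 3 can be sketched in a few lines: one bit per disk block, set when the block is allocated. The disk size here is an arbitrary assumption.

```python
class BlockBitmap:
    """Free-space management with one bit per disk block (1 = allocated)."""

    def __init__(self, n_blocks: int):
        self.bits = bytearray((n_blocks + 7) // 8)
        self.n_blocks = n_blocks

    def allocate(self) -> int:
        for i in range(self.n_blocks):           # first-fit scan of the bitmap
            if not self.bits[i // 8] & (1 << (i % 8)):
                self.bits[i // 8] |= 1 << (i % 8)
                return i
        raise OSError("disk full")

    def free(self, i: int) -> None:
        self.bits[i // 8] &= ~(1 << (i % 8))

bm = BlockBitmap(16)
a, b = bm.allocate(), bm.allocate()   # blocks 0 and 1
bm.free(a)
print(a, b, bm.allocate())            # freed block 0 is handed out again
```

A linked-list free list would avoid the linear scan but loses the bitmap's compactness: here 16 blocks are tracked in just two bytes.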
The world contains enormous biological diversity, and roughly two thirds of it is aquatic, in fresh and marine waters. Organized knowledge of these kinds of life, supported by proper file management, is very important for fisheries science.
Fresh and marine waters hold a huge number of vertebrate and invertebrate animals and plants. Identifying them and recording their uses is very important for fisheries and aquaculture, and proper file management plays a useful role in both.
Without proper file management, it is not possible to keep track of all these plants and animals.
Knowledge of culturable species and their predators, habitats, and food habits is very important for successful aquaculture, as are data on breeding seasons, behaviour, and high-growth-rate fish. Proper management of these records, and of study documents for fisheries students, is very important. File management is therefore central to fisheries science.
This document provides a user guide for those new to Hadoop who will be working with the Hadoop Distributed File System (HDFS). It introduces HDFS and explains what it is and how it works. It describes how data is stored across clusters and nodes. It then provides information on how users can access and interact with their files on HDFS, including using shell commands or the Hue interface. It explains how data is organized and stored at different access levels. The goal is for users to understand how to store and retrieve their files using HDFS.
Research Data Management: Part 1, Principles & Responsibilities (AmyLN)
This two-part course is a collaboration between CU Libraries/Information Services and the Office of Research Compliance & Training. The purpose of this course is to familiarize you with the various aspects of research data management (RDM).
Part 1: Why RDM is both recommended and required
What research data are
Who is responsible for RDM
Part 2:
When RDM activities occur
How you can carry out RDM activities
Compiler Components and their Generators - Lexical Analysis (Guido Wachsmuth)
The document discusses lexical analysis in compiler construction, including an overview of the topics covered such as regular languages represented as regular grammars, regular expressions, and finite state automata. It also discusses the equivalence between these formalisms and techniques for constructing tools for lexical analysis.
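The regular-expression view of lexical analysis summarized above can be sketched with a tiny tokenizer: each token class is one alternative in a regular expression, and the lexer repeatedly matches from the current position. The token set (numbers, identifiers, operators) is a minimal assumption, not the one from the slides.

```python
import re

# Each token class as a named regular-expression alternative.
TOKEN_RE = re.compile(r"""
    (?P<NUMBER>\d+)
  | (?P<IDENT>[A-Za-z_]\w*)
  | (?P<OP>[+\-*/=])
  | (?P<SKIP>\s+)
""", re.VERBOSE)

def tokenize(src: str):
    """Yield (kind, text) pairs; whitespace is skipped, anything else is an error."""
    pos = 0
    while pos < len(src):
        m = TOKEN_RE.match(src, pos)
        if not m:
            raise SyntaxError(f"unexpected character {src[pos]!r} at {pos}")
        if m.lastgroup != "SKIP":
            yield m.lastgroup, m.group()
        pos = m.end()

print(list(tokenize("x1 = 42 + y")))
# [('IDENT', 'x1'), ('OP', '='), ('NUMBER', '42'), ('OP', '+'), ('IDENT', 'y')]
```

This mirrors the equivalence the slides describe: the regular expressions here could just as well be given as a regular grammar or compiled into a finite state automaton, which is what generated lexers do under the hood.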
A basic course on Research data management: part 1 - part 4 (Leon Osinski)
Slides belonging to a basic course on research data management. The course consists of 4 parts:
Part 1: what and why
1.1 data management plans
Part 2: protecting and organizing your data
2.1 data safety and data security
2.2 file naming, organizing data (TIER documentation protocol)
Part 3: sharing your data
3.1 via collaboration platforms (during research)
3.2 via data archives (after your research)
Part 4: caring for your data, or making data usable
4.1 tidy data
4.2 documentation/metadata
4.3 licenses
4.4 open data formats
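Part 4's pairing of tidy data (4.1) with open data formats (4.4) can be sketched as follows: one observation per row, one variable per column, saved as plain CSV so any tool can read it back. The column names are illustrative.

```python
import csv
import io

# Tidy layout: each row is one observation, each column one variable.
rows = [
    {"subject": "S01", "condition": "control",   "score": 12},
    {"subject": "S01", "condition": "treatment", "score": 15},
    {"subject": "S02", "condition": "control",   "score": 9},
]

buf = io.StringIO()                                # stands in for an on-disk .csv file
writer = csv.DictWriter(buf, fieldnames=["subject", "condition", "score"])
writer.writeheader()
writer.writerows(rows)

# CSV is an open, software-independent format: any tool can read it back.
buf.seek(0)
back = list(csv.DictReader(buf))
print(back[1])   # {'subject': 'S01', 'condition': 'treatment', 'score': '15'}
```

Note that CSV carries no types (the score comes back as a string), which is exactly why the course pairs open formats with documentation and metadata (4.2).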
Research tools & data collection method_vipin (VIPIN PATIDAR)
Data collection methods. The presentation includes the following sub-points:
1) definition of research tool
2) data
3) primary and secondary data
4) observation method
5) interview
6) questionnaire
7) physiological measure
Survey around Semantics for Programming Languages, and Machine Proof using Coq (bellbind)
The document surveys semantics for programming languages and machine proof using Coq. It discusses various type systems for lambda calculus, encoding styles for target languages, and proving properties of programming languages using Coq. The author aims to continue their work on definitional interpreters, gradual typing, and learning techniques for defining languages with proofs in Coq.
This document discusses subprograms and parameter passing in programming languages. It covers fundamental concepts of subprograms like definitions, calls, headers, and parameters. It then describes different parameter passing methods like pass-by-value, pass-by-reference, and pass-by-name. It also discusses how major languages like C, C++, Java, Ada, C#, and PHP implement parameter passing and type checking.
This presentation discusses primary and secondary data collection methods. It begins by defining primary data as original data collected specifically for the research purpose, such as through surveys and interviews. Secondary data refers to data previously collected by others, such as published sources. Both data types are useful but have tradeoffs - primary data directly addresses the research question while secondary data is easier to obtain but may not be specific. The presentation provides examples of primary and secondary data collection techniques and their respective advantages and disadvantages.
The data science process document outlines the typical steps involved in a data science project including: 1) setting research goals, 2) retrieving data from internal or external sources, 3) preparing data through cleansing and transformation, 4) performing exploratory data analysis, 5) building models using techniques like machine learning or statistics, and 6) presenting and automating results. It also discusses challenges in working with different file formats and the importance of understanding various formats as a data scientist.
This document discusses file management concepts including files, file attributes, file operations, file types, file structure, and access methods. Key points include:
- Files represent named collections of related information stored on secondary storage.
- File attributes include name, identifier, type, location, size, protection, and time/date information.
- Basic file operations are creating, writing, reading, repositioning, deleting, and truncating files.
- File types include ordinary files, directory files, and special files which represent devices.
- File structure and access methods like sequential, direct, and indexed access determine how information is organized and retrieved from files.
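The basic operations listed above (create, write, read, reposition, truncate, delete) map almost one-to-one onto a typical OS file API; a minimal sketch using Python's os-level calls:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.dat")

fd = os.open(path, os.O_RDWR | os.O_CREAT)   # create
os.write(fd, b"hello, file systems")         # write
os.lseek(fd, 0, os.SEEK_SET)                 # reposition to the start
data = os.read(fd, 5)                        # read the first five bytes
os.ftruncate(fd, 5)                          # truncate the file to just "hello"
os.close(fd)

print(data, os.path.getsize(path))           # b'hello' 5
os.remove(path)                              # delete
```

Each call here is a thin wrapper over the corresponding system call, which is why the same vocabulary recurs across operating systems and textbooks.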
This document provides an overview of file handling in QBASIC. It discusses writing, reading, updating, and deleting records from external data files. It describes the OUTPUT, INPUT, and APPEND file modes used in QBASIC and defines program and data files. Syntax for the WRITE and INPUT commands to write and read from data files is shown. An example program is provided that writes a student's name, class, and roll number to an external file called "std.txt" by getting input from the user.
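The QBASIC file modes described above have direct analogues in Python, where "w", "r", and "a" correspond to OUTPUT, INPUT, and APPEND; a sketch mirroring the student-record example (the names and values are made up for illustration):

```python
import csv
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "std.txt")

# OUTPUT mode ~ "w": create the file and WRITE one record.
with open(path, "w", newline="") as f:
    csv.writer(f).writerow(["Ram", "10", "7"])        # name, class, roll number

# APPEND mode ~ "a": add another record without erasing the file.
with open(path, "a", newline="") as f:
    csv.writer(f).writerow(["Sita", "10", "12"])

# INPUT mode ~ "r": read the records back.
with open(path) as f:
    records = list(csv.reader(f))
print(records)   # [['Ram', '10', '7'], ['Sita', '10', '12']]
```

As in QBASIC, opening an existing file in write mode silently destroys its contents, which is why append mode exists as a separate choice.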
This document provides an overview of file management and file systems. It discusses the basic components of a file including fields, records, files, and databases. It describes common file organization types like sequential, indexed sequential, indexed, and direct files. It also explains the basic components and objectives of a file management system. Finally, it covers B-trees, which are a balanced tree structure commonly used to organize indexes in databases and file systems to provide efficient searching, insertion, and deletion of items.
This document outlines key concepts related to data processing including:
- Data refers to facts and observations represented by symbols. Data processing manipulates data to transform it into useful information.
- Data processing activities include tools to convert data into information, from manual to electronic tools.
- The data processing cycle includes input, processing, output, and storage steps.
- Data hierarchy shows the arrangement of data from fields to records to files to databases.
This document discusses best practices for data organization, documentation, and metadata. It recommends using open standard file formats that will remain readable over time, consistent file naming conventions with descriptive names, and version control for files. Metadata should include descriptive, technical, and administrative information to document the data and ensure it can be understood and managed. Good documentation involves information on the data collection process and dataset structure.
This presentation defines key concepts about files and databases. It explains that files are collections of data organized and stored by operating systems, and that effective filing systems allow for easy retrieval. The presentation describes how files are named with a name and extension, and identifies common file types based on extensions like docx, jpg, and html. It also defines databases as organized collections of data that can be easily retrieved, and notes that database management systems help control redundancy, maintain reliability, restrict access, share data, and backup or recover information.
This presentation defines key concepts about files and databases. It explains that files are collections of data organized and stored by operating systems, and that effective filing systems allow for easy retrieval. The presentation describes how files are named with a name and extension, and identifies common file types based on extensions like docx, jpg, and html. It also defines databases as organized collections of data that can be easily retrieved, and notes that database management systems help control redundancy, maintain reliability, restrict access, share data, and backup or recover information.
This presentation defines key concepts about files and databases. It explains that files are collections of data organized and stored by operating systems, and that effective filing systems allow for easy retrieval. The presentation describes how files are named with a name and extension, and identifies common file types based on extensions like docx, jpg, and html. It also defines databases as organized collections of data that can be easily retrieved, and notes that database management systems help control redundancy, maintain reliability, restrict access, share data, and backup or recover information.
Learn about the File Concept in operating systems pptgeethasenthil2706
A file is the smallest unit of storage on a computer system. It provides a logical view of information stored on disks. A file contains a sequence of bits, bytes, or records that are defined by the file owner. Common file operations include opening, reading, writing, closing, and deleting files. The operating system tracks attributes like the file name, size, location, and access rights to manage file input/output requests from processes. File types help the operating system recognize different categories of files like text, source code, and binary files.
This document discusses distributed file systems and Network File System (NFS). It begins with an overview of distributed file system requirements including transparency, performance, scalability, fault tolerance and security. It then describes the general file service architecture with client, directory server and file server modules. The document outlines the NFS architecture and how it uses remote procedure calls for communication. It explains how NFS implements client-side caching and handles consistency across clients and the server. Finally, it briefly discusses the NFS mounting and file access protocols.
Useful documents for engineering students of CSE, and specially for students of aryabhatta knowledge university, Bihar (A.K.U. Bihar). It covers following topics, File concept, access methods, directory structure
Degonto, File management system in fisheries scienceDegonto Islam
File management is an important part of fisheries management. It involves organizing files related to fisheries into directories and subdirectories on computers in an efficient way. This allows important fisheries data, which can amount to terabytes, to be easily stored, named, and retrieved. Files are typically organized in a hierarchical file system with drives, folders, and subfolders. Proper file naming conventions and restrictions on file names are followed. Files can be sorted, copied, deleted, and backed up. Keeping files secure involves locking them, using strong passwords, and creating backups in separate locations.
This document provides an overview of a workshop on good practice in research data management held at the University of Tartu, Estonia. The workshop covered various topics including defining research data, research data management and data management plans, organizing and documenting data, file formats and storage, metadata, security, and sharing and preserving data. The workshop was led by Stuart Macdonald from the University of Edinburgh and included presentations, introductions, and discussions around each of these research data management topics.
The document discusses configuring files and filegroups in SQL Server. It describes how SQL Server uses data files to store database contents and transaction log files to store transactions. It also discusses filegroups, which map database objects to files on disk. The document outlines the types of file extensions (.mdf, .ndf, .ldf) used and how the proportional fill algorithm works. It recommends best practices for configuring files and filegroups when creating a new database. The document also briefly discusses FILESTREAM, the tempdb database, and file naming conventions.
File organisation in system analysis and designMohitgauri
This document provides an overview of different file organization strategies, including heap files, sequential files, indexed sequential files, inverted list files, and direct files. It discusses the key characteristics of each method, such as how records are stored and accessed. The main advantages and disadvantages of each approach are also summarized. Some key points covered include that sequential files are best for sequential processing but slow for random access, while direct files allow very fast random access but require more complex hardware and software. The document aims to help readers understand different options for structuring computer files.
This document provides information on file handling and dictionaries in Python. It discusses file paths, opening and closing files, reading from and writing to files. It also covers creating, accessing, adding, updating and deleting elements in dictionaries. Finally, it discusses directory methods like getcwd(), chdir(), listdir(), mkdir(), rmdir() and rename() for working with directories in Python.
File organization uses storage, organization, and access of data stored in files. There are two main types of file organization: sequential and multitable clustering. Sequential organization stores records in order of a search key, while multitable clustering stores related records from different relations together to minimize disk accesses. Proper file organization is important for database efficiency. Common file functions in C include fopen(), fclose(), fread(), fwrite(), getc(), putc(), getw(), and putw() to open, close, read, write, and access data in text and binary files.
Similar to A basic course on Reseach data management, part 2: protecting and organizing your data (20)
PROOF course Writing articles and abstracts in English, part: Copyright in ac...Leon Osinski
For this presentation students need to have seen 5 web lectures on copyright. During the presentation, the knowledge gained by the students by looking at the web lectures will be tested on the basis of a number of practical questions.
What funders want you to do with your dataLeon Osinski
Funders want researchers to 1) deposit the relevant data from their research in an approved repository to make it FAIR (Findable, Accessible, Interoperable, Reusable), 2) make the data openly available whenever possible, and 3) write a Data Management Plan describing how they will manage their data during and after the project. Funders require depositing data in repositories to enable reuse, making data open access "as open as possible, as closed as necessary", and having a Data Management Plan that addresses reuse according to FAIR principles.
Research data management at TU EindhovenLeon Osinski
The document discusses research data management at TU Eindhoven. It outlines the long process of developing RDM practices since 2008. It describes the current organization and governance structure for RDM. Key external requirements for RDM from funders, regulations, and integrity standards are also summarized. The document concludes by outlining RDM support services available and the benefits of good RDM practices.
The document discusses the use of Creative Commons licenses for research data. It notes that funders and universities are pushing for open access to research articles and data. However, applying a CC BY license fully transfers copyright to the public domain. For data, researchers must ensure they own the copyright and are authorized to license it. Less restrictive licenses like CC BY-NC still allow commercial reuse with permission. The document debates finding a balance between open access and allowing researchers to control dissemination and potential rewards from their data.
Be open: what funders want you to do with your publications and research dataLeon Osinski
Research funders want researchers to:
1. Publish research articles through open access to make the articles widely available.
2. Deposit the underlying research data in repositories to make the data findable, accessible, interoperable, and reusable (FAIR).
3. Attach open licenses like CC BY to both publications and data to allow for commercial reuse when possible.
3TU.Datacentrum: presentation for OpenML Workshop (III) at Eindhoven, 22-10-2...Leon Osinski
This document discusses sharing and reusing research data. It explains that sharing data is expected by funders and important for reproducibility, reusing results, and increasing visibility. To be reusable, data should be findable, accessible, intelligible, interoperable, and preserved. The 3TU.Datacentrum assists with assigning DOIs for citation, makes data openly accessible with some embargo options, and ensures long-term preservation. DOIs are assigned through DataCite Netherlands, which research organizations can register with for a fee.
Horizon 2020 and research data : info meeting Horizon 2020 @ TUe, 07-10-2014 ...Leon Osinski
This document discusses research data management (RDM) and the open data pilot program in Horizon 2020. It provides information on why RDM is important, noting key stakeholders that expect data sharing, and how RDM enables data re-use and integrity. The Horizon 2020 open data pilot program is described, including the seven research areas included in the pilot and funder requirements for a Data Management Plan and depositing data in repositories. Guidance and support resources for participating in the open data pilot are also listed.
Copyright and citation issues : PROOF course Writing articles and abstracts /...Leon Osinski
As an author of scholarly papers, you will use in your paper materials (text fragments, picture, tables, figures) of other people. In most cases this material is copyright-protected which means that in most cases, not always, you have to ask permission to re-use that material and to attribute the source of the material. This is also the first topic of this lecture: you as a user of copyright-protected material.
In the second place, when you’re done writing you want to publish your paper in a journal. In most cases, not always, this goes with a transfer of the copyright that you initially own to a publisher. Transfer of copyright has some consequences and this is the second topic of this presentation: you as a producer of copyright-protected material.
Onderzoeksdata-bepalingen van financiers van universitair onderzoek in NL: Ma...Leon Osinski
Onderzoeksdata-bepalingen van financiers van universitair onderzoek in NL : presentatie Master Class Research Data Management in Nederland, Maastricht, 3/4 april 2014.
UKB Werkgroep Datamanagement,Voorwaarden van Financiers.
Maarten van Bentum, Henk van den Hoogen, Leon Osinski
Research data management during and after your research ; an introduction / L...Leon Osinski
This document outlines a workshop on research data management for PhD students. The workshop covers managing data during research to ensure integrity and allow replication, as well as archiving or publishing data after research. During the workshop, presentations will discuss scientific integrity and data management during research, and data management after research. Discussions will explore topics like dealing with failed experiments, accessibility of data during research, and archiving data after a project is finished. The goal is to provide insight on responsible data practices during and after research.
Main Java[All of the Base Concepts}.docxadhitya5119
This is part 1 of my Java Learning Journey. This Contains Custom methods, classes, constructors, packages, multithreading , try- catch block, finally block and more.
Chapter wise All Notes of First year Basic Civil Engineering.pptxDenish Jangid
Chapter wise All Notes of First year Basic Civil Engineering
Syllabus
Chapter-1
Introduction to objective, scope and outcome the subject
Chapter 2
Introduction: Scope and Specialization of Civil Engineering, Role of civil Engineer in Society, Impact of infrastructural development on economy of country.
Chapter 3
Surveying: Object Principles & Types of Surveying; Site Plans, Plans & Maps; Scales & Unit of different Measurements.
Linear Measurements: Instruments used. Linear Measurement by Tape, Ranging out Survey Lines and overcoming Obstructions; Measurements on sloping ground; Tape corrections, conventional symbols. Angular Measurements: Instruments used; Introduction to Compass Surveying, Bearings and Longitude & Latitude of a Line, Introduction to total station.
Levelling: Instrument used Object of levelling, Methods of levelling in brief, and Contour maps.
Chapter 4
Buildings: Selection of site for Buildings, Layout of Building Plan, Types of buildings, Plinth area, carpet area, floor space index, Introduction to building byelaws, concept of sun light & ventilation. Components of Buildings & their functions, Basic concept of R.C.C., Introduction to types of foundation
Chapter 5
Transportation: Introduction to Transportation Engineering; Traffic and Road Safety: Types and Characteristics of Various Modes of Transportation; Various Road Traffic Signs, Causes of Accidents and Road Safety Measures.
Chapter 6
Environmental Engineering: Environmental Pollution, Environmental Acts and Regulations, Functional Concepts of Ecology, Basics of Species, Biodiversity, Ecosystem, Hydrological Cycle; Chemical Cycles: Carbon, Nitrogen & Phosphorus; Energy Flow in Ecosystems.
Water Pollution: Water Quality standards, Introduction to Treatment & Disposal of Waste Water. Reuse and Saving of Water, Rain Water Harvesting. Solid Waste Management: Classification of Solid Waste, Collection, Transportation and Disposal of Solid. Recycling of Solid Waste: Energy Recovery, Sanitary Landfill, On-Site Sanitation. Air & Noise Pollution: Primary and Secondary air pollutants, Harmful effects of Air Pollution, Control of Air Pollution. . Noise Pollution Harmful Effects of noise pollution, control of noise pollution, Global warming & Climate Change, Ozone depletion, Greenhouse effect
Text Books:
1. Palancharmy, Basic Civil Engineering, McGraw Hill publishers.
2. Satheesh Gopi, Basic Civil Engineering, Pearson Publishers.
3. Ketki Rangwala Dalal, Essentials of Civil Engineering, Charotar Publishing House.
4. BCP, Surveying volume 1
हिंदी वर्णमाला पीपीटी, hindi alphabet PPT presentation, hindi varnamala PPT, Hindi Varnamala pdf, हिंदी स्वर, हिंदी व्यंजन, sikhiye hindi varnmala, dr. mulla adam ali, hindi language and literature, hindi alphabet with drawing, hindi alphabet pdf, hindi varnamala for childrens, hindi language, hindi varnamala practice for kids, https://www.drmullaadamali.com
Beyond Degrees - Empowering the Workforce in the Context of Skills-First.pptxEduSkills OECD
Iván Bornacelly, Policy Analyst at the OECD Centre for Skills, OECD, presents at the webinar 'Tackling job market gaps with a skills-first approach' on 12 June 2024
Temple of Asclepius in Thrace. Excavation resultsKrassimira Luka
The temple and the sanctuary around were dedicated to Asklepios Zmidrenus. This name has been known since 1875 when an inscription dedicated to him was discovered in Rome. The inscription is dated in 227 AD and was left by soldiers originating from the city of Philippopolis (modern Plovdiv).
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...PECB
Denis is a dynamic and results-driven Chief Information Officer (CIO) with a distinguished career spanning information systems analysis and technical project management. With a proven track record of spearheading the design and delivery of cutting-edge Information Management solutions, he has consistently elevated business operations, streamlined reporting functions, and maximized process efficiency.
Certified as an ISO/IEC 27001: Information Security Management Systems (ISMS) Lead Implementer, Data Protection Officer, and Cyber Risks Analyst, Denis brings a heightened focus on data security, privacy, and cyber resilience to every endeavor.
His expertise extends across a diverse spectrum of reporting, database, and web development applications, underpinned by an exceptional grasp of data storage and virtualization technologies. His proficiency in application testing, database administration, and data cleansing ensures seamless execution of complex projects.
What sets Denis apart is his comprehensive understanding of Business and Systems Analysis technologies, honed through involvement in all phases of the Software Development Lifecycle (SDLC). From meticulous requirements gathering to precise analysis, innovative design, rigorous development, thorough testing, and successful implementation, he has consistently delivered exceptional results.
Throughout his career, he has taken on multifaceted roles, from leading technical project management teams to owning solutions that drive operational excellence. His conscientious and proactive approach is unwavering, whether he is working independently or collaboratively within a team. His ability to connect with colleagues on a personal level underscores his commitment to fostering a harmonious and productive workplace environment.
Date: May 29, 2024
Tags: Information Security, ISO/IEC 27001, ISO/IEC 42001, Artificial Intelligence, GDPR
-------------------------------------------------------------------------------
Find out more about ISO training and certification services
Training: ISO/IEC 27001 Information Security Management System - EN | PECB
ISO/IEC 42001 Artificial Intelligence Management System - EN | PECB
General Data Protection Regulation (GDPR) - Training Courses - EN | PECB
Webinars: https://pecb.com/webinars
Article: https://pecb.com/article
-------------------------------------------------------------------------------
For more information about PECB:
Website: https://pecb.com/
LinkedIn: https://www.linkedin.com/company/pecb/
Facebook: https://www.facebook.com/PECBInternational/
Slideshare: http://www.slideshare.net/PECBCERTIFICATION
Strategies for Effective Upskilling is a presentation by Chinwendu Peace in a Your Skill Boost Masterclass organisation by the Excellence Foundation for South Sudan on 08th and 09th June 2024 from 1 PM to 3 PM on each day.
LAND USE LAND COVER AND NDVI OF MIRZAPUR DISTRICT, UPRAHUL
This Dissertation explores the particular circumstances of Mirzapur, a region located in the
core of India. Mirzapur, with its varied terrains and abundant biodiversity, offers an optimal
environment for investigating the changes in vegetation cover dynamics. Our study utilizes
advanced technologies such as GIS (Geographic Information Systems) and Remote sensing to
analyze the transformations that have taken place over the course of a decade.
The complex relationship between human activities and the environment has been the focus
of extensive research and worry. As the global community grapples with swift urbanization,
population expansion, and economic progress, the effects on natural ecosystems are becoming
more evident. A crucial element of this impact is the alteration of vegetation cover, which plays a
significant role in maintaining the ecological equilibrium of our planet.Land serves as the foundation for all human activities and provides the necessary materials for
these activities. As the most crucial natural resource, its utilization by humans results in different
'Land uses,' which are determined by both human activities and the physical characteristics of the
land.
The utilization of land is impacted by human needs and environmental factors. In countries
like India, rapid population growth and the emphasis on extensive resource exploitation can lead
to significant land degradation, adversely affecting the region's land cover.
Therefore, human intervention has significantly influenced land use patterns over many
centuries, evolving its structure over time and space. In the present era, these changes have
accelerated due to factors such as agriculture and urbanization. Information regarding land use and
cover is essential for various planning and management tasks related to the Earth's surface,
providing crucial environmental data for scientific, resource management, policy purposes, and
diverse human activities.
Accurate understanding of land use and cover is imperative for the development planning
of any area. Consequently, a wide range of professionals, including earth system scientists, land
and water managers, and urban planners, are interested in obtaining data on land use and cover
changes, conversion trends, and other related patterns. The spatial dimensions of land use and
cover support policymakers and scientists in making well-informed decisions, as alterations in
these patterns indicate shifts in economic and social conditions. Monitoring such changes with the
help of Advanced technologies like Remote Sensing and Geographic Information Systems is
crucial for coordinated efforts across different administrative levels. Advanced technologies like
Remote Sensing and Geographic Information Systems
9
Changes in vegetation cover refer to variations in the distribution, composition, and overall
structure of plant communities across different temporal and spatial scales. These changes can
occur natural.
How to Make a Field Mandatory in Odoo 17Celine George
In Odoo, making a field required can be done through both Python code and XML views. When you set the required attribute to True in Python code, it makes the field required across all views where it's used. Conversely, when you set the required attribute in XML views, it makes the field required only in the context of that particular view.
it describes the bony anatomy including the femoral head , acetabulum, labrum . also discusses the capsule , ligaments . muscle that act on the hip joint and the range of motion are outlined. factors affecting hip joint stability and weight transmission through the joint are summarized.
Walmart Business+ and Spark Good for Nonprofits.pdfTechSoup
"Learn about all the ways Walmart supports nonprofit organizations.
You will hear from Liz Willett, the Head of Nonprofits, and hear about what Walmart is doing to help nonprofits, including Walmart Business and Spark Good. Walmart Business+ is a new offer for nonprofits that offers discounts and also streamlines nonprofits order and expense tracking, saving time and money.
The webinar may also give some examples on how nonprofits can best leverage Walmart Business+.
The event will cover the following::
Walmart Business + (https://business.walmart.com/plus) is a new shopping experience for nonprofits, schools, and local business customers that connects an exclusive online shopping experience to stores. Benefits include free delivery and shipping, a 'Spend Analytics” feature, special discounts, deals and tax-exempt shopping.
Special TechSoup offer for a free 180 days membership, and up to $150 in discounts on eligible orders.
Spark Good (walmart.com/sparkgood) is a charitable platform that enables nonprofits to receive donations directly from customers and associates.
Answers about how you can do more with Walmart!"
Walmart Business+ and Spark Good for Nonprofits.pdf
A basic course on Research data management, part 2: protecting and organizing your data
1. A basic course on Research data management
part 2: protecting and organizing your data
PROOF course Information Literacy and Research Data Management
TU/e, 24-01-2017
l.osinski@tue.nl, TU/e IEC/Library
Available under a CC BY-SA license, which permits copying and redistributing the material in any medium or format, and adapting the material for any purpose, provided the original author and source are credited and you distribute the adapted material under the same license as the original.
2. Research data management
what was it again?
Sharing your data, or making your data findable and accessible, with good data practices:
→ protecting your data: backup, access control; file naming, organizing data, versioning
+ sharing your data via collaboration platforms and archives
Caring for your data, or making your data re-usable and interoperable, with good data practices:
+ metadata, tidy data, licenses
3. Protecting your data
good data practices during your research
"…we can copy everything and do not manage it well." (Indra Sihar)
Be safe:
+ storage and backup (data safety, protecting against loss): use the local ICT infrastructure (including SURFdrive) as much as possible
+ access control (data security, protecting against unauthorized use): with DataverseNL, for example
Be organized, or: you should be able to tell what's in a file without opening it:
+ file naming, organizing data in folders, versioning
+ data classification and retention: different treatment of different data (raw versus processed data)
4. File-naming #1
be consistent and aim for concise but informative names
Good file names are consistent (use file-naming conventions), unique (a name distinguishes a file from files on similar subjects and from other versions of the same file) and meaningful (use descriptive names). File-naming conventions help you find your data, help others find your data, and help you track which version of a file is the most current.
+ Avoid special characters in a file name: / : * ? < > | [ ] & $
+ Use underscores instead of periods or spaces to separate logical elements in a file name
+ Avoid very long names: usually 25 characters is sufficient
+ Names should include all necessary descriptive information, independent of where the file is stored
+ Include dates and a version number in file names
+ Add a readme.txt to each folder explaining the file naming and its meaning
Source: File naming conventions
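The naming rules above can be sketched as a small helper function. Everything here (the function name, the date_description_version element order) is an illustration of the conventions, not a prescribed tool:

```python
import re
from datetime import date

def make_filename(description, version, ext, when=None):
    """Build a file name following the conventions above: ISO date first,
    underscores between logical elements, hyphens inside an element,
    no special characters, and an explicit version number."""
    when = when or date.today()
    # Strip the characters the slide warns against: / : * ? < > | [ ] & $
    clean = re.sub(r"[/:*?<>|\[\]&$]", "", description)
    # Use hyphens, not spaces or periods, inside a logical element.
    clean = re.sub(r"[\s.]+", "-", clean.strip())
    return f"{when.isoformat()}_{clean}_v{version:02d}.{ext}"

print(make_filename("interview recording THD", 1, "mp3", date(2013, 4, 12)))
# → 2013-04-12_interview-recording-THD_v01.mp3
```

A readme.txt in the same folder would then document what each element (date, description, initials, version) means.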
5. File naming #2
think about the ordering of elements within a filename
Order by date:
2013-04-12_interview-recording_THD.mp3
2013-04-12_interview-transcript_THD.docx
2012-12-15_interview-recording_MBD.mp3
2012-12-15_interview-transcript_MBD.docx
Order by subject:
MBD_interview-recording_2012-12-15.mp3
MBD_interview-transcript_2012-12-15.docx
THD_interview-recording_2013-04-12.mp3
THD_interview-transcript_2013-04-12.docx
Order by type:
Interview-recording_MBD_2012-12-15.mp3
Interview-recording_THD_2013-04-12.mp3
Interview-transcript_MBD_2012-12-15.docx
Interview-transcript_THD_2013-04-12.docx
Forced order with numbering:
01_THD_interview-recording_2013-04-12.mp3
02_THD_interview-transcript_2013-04-12.docx
03_MBD_interview-recording_2012-12-15.mp3
04_MBD_interview-transcript_2012-12-15.docx
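Ordering by date works because ISO dates (YYYY-MM-DD) are zero-padded and run from most to least significant unit, so a plain alphabetical sort is also a chronological sort. A quick check with the example names above:

```python
files = [
    "2013-04-12_interview-recording_THD.mp3",
    "2012-12-15_interview-recording_MBD.mp3",
    "2013-04-12_interview-transcript_THD.docx",
    "2012-12-15_interview-transcript_MBD.docx",
]
# Lexicographic sort equals chronological sort for ISO-dated names:
# both 2012-12-15 files list before both 2013-04-12 files.
for name in sorted(files):
    print(name)
```

The same reasoning explains the "forced order with numbering" variant: zero-padded prefixes like 01_, 02_ sort in the order you chose.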
6. File organization
Source: Beatriz Ramirez, Data management plan for the PhD project: development and application of a monitoring system to assess the impacts of climate and land cover changes on eco-hydrological processes in an eastern Andes catchment area
Source: Haselager, dr. G.J.T. (Radboud University Nijmegen); Aken, prof. dr. M.A.G. van (Utrecht University) (2000): Personality and Family Relationships. DANS. http://dx.doi.org/10.17026/dans-xk5-y7vc
7. Organizing your data in folders #1
based on the TIER documentation protocol (http://www.projecttier.org/)
1. Main project folder (name of your research project/working title of your
paper)
1.1. Original data and metadata
1.1.1. Original data
1.1.2. Metadata
1.1.2.1. Supplements
1.2. Processing and analysis files
1.2.1. Importable data files
1.2.2. Command files
1.2.3. Analysis files
1.3. Documents
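The skeleton above can be created in one go. A minimal sketch, assuming hypothetical folder names (rename them to match your own project and working titles):

```python
import os

# TIER-style project skeleton; each path mirrors one outline entry above.
folders = [
    "my-project/original-data-and-metadata/original-data",
    "my-project/original-data-and-metadata/metadata/supplements",
    "my-project/processing-and-analysis/importable-data-files",
    "my-project/processing-and-analysis/command-files",
    "my-project/processing-and-analysis/analysis-files",
    "my-project/documents",
]
for folder in folders:
    os.makedirs(folder, exist_ok=True)  # idempotent: safe to re-run
```

Creating the full hierarchy at the start of a project makes it obvious where each new file belongs.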
8. 1. Main project folder (name of your research project/working title of your
paper)
1.1. Original data and metadata
1.1.1. Original data (keep these read only)
Any data that were necessary for any part of the processing
and/or analysis you reported in your paper.
Copies of all your original data files, saved in exactly the
format in which you first obtained them. The name of an
original data file may be changed.
1.1.2. Metadata
1.1.2.1. Supplements
Organizing your data in folders #2
based on the TIER documentation protocol
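The advice above to keep original data files read-only can be enforced at the file-system level, so they cannot be modified accidentally. A minimal sketch, with a hypothetical path and a stand-in file:

```python
import os
import stat

# Hypothetical original data file (created here only for the demonstration).
path = "original-data/survey.csv"
os.makedirs(os.path.dirname(path), exist_ok=True)
open(path, "w").close()

# Drop all write permissions; owner, group, and others keep read access.
os.chmod(path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)
print(oct(os.stat(path).st_mode & 0o777))  # e.g. 0o444 on Linux
```

On shared or networked storage, the equivalent is usually done via the platform's access-control settings rather than file modes.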
9. 1. Main project folder (name of your research project/working title of your paper)
1.1. Original data and metadata
1.1.1. Original data
1.1.2. Metadata
The Metadata Guide: document that provides information about each of your
original data files. Applies especially to obtained data files
A bibliographic citation of the original data files, including the date you
downloaded or obtained the original data files and unique identifiers that
have been assigned to the original data files.
Information about how to obtain a copy of the original data file
Whatever additional information is needed to understand and use the data
in the original data file
1.1.2.1. Supplements
Additional information about an original data file that you did not write
yourself but that is found in existing supplementary documents, such as
users' guides and code books that accompany the original data file
Organizing your data in folders #3
based on the TIER documentation protocol
10. Organizing your data in folders #4
based on the TIER documentation protocol
1. Main project folder (name of your research project/working title of your paper)
1.1. Original data and metadata
1.1.1. Original data
1.1.2. Metadata
1.1.2.1. Supplements
1.2. Processing and analysis files
1.2.1. Importable data files (the data you work with)
A corresponding version for each of the original data files. This version can be
identical to the original version, or in some cases it will be a modified version.
For example, modifications required to allow your software to read the file
(converting the file to another format, removing explanatory notes from a
table…).
The original and importable versions of a data file should be given different
names
The importable data file should be as nearly identical as possible to the
original
The changes you make to your original data files to create the corresponding
importable data files should be described in a Readme file
1.2.2. Command files
1.2.3. Analysis files
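The importable-data step above can be sketched in code. This is a minimal illustration, not part of the TIER protocol itself; the file names (survey_2016.csv and its importable counterpart) and the rule for what counts as an explanatory note (lines starting with "#") are hypothetical assumptions.

```python
# Sketch: derive an importable data file from an original data file and
# log the change in a Readme file. File names are hypothetical examples.
from pathlib import Path

ORIGINAL = Path("original_data/survey_2016.csv")                  # hypothetical original
IMPORTABLE = Path("importable_data/survey_2016_importable.csv")   # different name, per TIER

def create_importable(original: Path, importable: Path, readme: Path) -> None:
    """Copy the original, dropping explanatory note lines, and record the change."""
    importable.parent.mkdir(parents=True, exist_ok=True)
    lines = original.read_text(encoding="utf-8").splitlines(keepends=True)
    # Example modification: remove note lines ("#...") that statistical software
    # cannot parse. Every such change must be described in the Readme file.
    kept = [ln for ln in lines if not ln.startswith("#")]
    importable.write_text("".join(kept), encoding="utf-8")
    with readme.open("a", encoding="utf-8") as fh:
        fh.write(f"{importable.name}: copied from {original.name}; "
                 f"removed {len(lines) - len(kept)} explanatory note line(s)\n")
```

Keeping the modification in a script (rather than editing the file by hand) makes the change itself reproducible, which is the point of the protocol.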
Organizing your data in folders #5
based on the TIER documentation protocol
1. Main project folder (name of your research project/working title of your paper)
1.1. Original data and metadata
1.1.1. Original data
1.1.2. Metadata
1.1.2.1. Supplements
1.2. Processing and analysis files
1.2.1. Importable data files
1.2.2. Command files
One or more files containing code written in the syntax of the (statistical)
software you use for the study
Importing phase: commands to import or read the files and save them in a
format that suits your software
Processing phase: commands that execute all the processing required to
transform the importable version of your files into the final data files that
you will use in your analysis (i.e. cleaning, recoding, joining two or more
data files, dropping variables or cases, generating new variables)
Generating the results: commands that open the analysis data file(s), and
then generate the results reported in your paper.
1.2.3. Analysis files
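The three phases of a command file can be sketched as one short script. This is an illustrative sketch using only the Python standard library; the column names ("score"), the pass threshold, and the file layout are hypothetical assumptions, not prescribed by TIER.

```python
# Sketch of a single command file with the three phases named on the slide.
# Variable names and the cleaning rules are hypothetical examples.
import csv
import statistics
from pathlib import Path

# --- Importing phase: read the importable file into the software's format ---
def import_data(path: Path) -> list[dict]:
    with path.open(newline="", encoding="utf-8") as fh:
        return list(csv.DictReader(fh))

# --- Processing phase: clean, drop cases, generate new variables ---
def process(rows: list[dict]) -> list[dict]:
    cleaned = []
    for row in rows:
        if row["score"] == "":                 # drop cases with missing values
            continue
        row["score"] = float(row["score"])     # recode from text to numeric
        row["passed"] = row["score"] >= 5.5    # generate a new variable
        cleaned.append(row)
    return cleaned

# --- Generating the results: open the analysis data and produce the numbers ---
def results(rows: list[dict]) -> dict:
    scores = [r["score"] for r in rows]
    return {"n": len(scores), "mean": statistics.mean(scores)}
```

Because each phase is a function, re-running the whole pipeline from the original data to the reported numbers is a single call chain.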
Organizing your data in folders #6
based on the TIER documentation protocol
1. Main project folder (name of your research project/working title of your paper)
1.1. Original data and metadata
1.1.1. Original data
1.1.2. Metadata
1.1.2.1. Supplements
1.2. Processing and analysis files
1.2.1. Importable data files
1.2.2. Command files
1.2.3. Analysis files
The fully cleaned and processed data files that you use to generate the
results reported in your paper
The Data Appendix: a codebook for your analysis data files: a brief description
of the analysis data file(s), a complete definition of each variable (including
coding and/or units of measurement), the name of the original data file
from which the variable was extracted, the number of valid observations for
the variable, and the number of cases with missing values
Organizing your data in folders #7
based on the TIER documentation protocol
1. Main project folder (name of your research project/working title of your paper)
1.1. Original data and metadata
1.1.1. Original data
1.1.2. Metadata
1.1.2.1. Supplements
1.2. Processing and analysis files
1.2.1. Importable data files
1.2.2. Command files
1.2.3. Analysis files
1.3. Documents
An electronic copy of your complete final paper
The Readme file for your replication documentation, which should:
State what statistical software or other computer programs are needed to run
the command files
Explain the structure of the hierarchy of folders in which the documentation is
stored
Describe precisely any changes you made to your original data files to create
the corresponding importable data files
Give step-by-step instructions for using your documentation to replicate the
statistical results reported in your paper
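The full folder hierarchy from slides #2 through #7 can be set up once with a short script. A minimal sketch: the folder names below paraphrase the TIER outline (1.1 Original data and metadata, 1.2 Processing and analysis files, 1.3 Documents) and can be renamed to suit a project.

```python
# Sketch: create a TIER-style project folder hierarchy.
# Folder names are illustrative renderings of the outline on the slides.
from pathlib import Path

TIER_FOLDERS = [
    "original_data_and_metadata/original_data",
    "original_data_and_metadata/metadata/supplements",
    "processing_and_analysis_files/importable_data_files",
    "processing_and_analysis_files/command_files",
    "processing_and_analysis_files/analysis_files",
    "documents",
]

def create_project(root: Path) -> None:
    """Create the main project folder and all TIER subfolders."""
    for sub in TIER_FOLDERS:
        (root / sub).mkdir(parents=True, exist_ok=True)
    # The Readme file for the replication documentation lives under documents/
    (root / "documents" / "README.txt").touch()
```

Creating the structure up front, before any data arrive, makes it much easier to keep original and processed files apart from day one.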
URLs of mentioned webpages, in order of appearance
1. File naming conventions: https://lib.stanford.edu/data-management-services/file-naming
2. File organization: http://www.wageningenur.nl/web/file?uuid=3f974938-79a0-421f-b1ad-95eef49d777c&owner=c057b578-4a6a-4449-881b-17fff17e2f1a (paragraph 6, example 1)
3. File organization: Haselager, dr. G.J.T., Aken, prof. dr. M.A.G. van (2000): Personality and Family Relationships. DANS. http://dx.doi.org/10.17026/dans-xk5-y7vc (Data guide, p. 24-26)
4. Version control: http://www.data-archive.ac.uk/create-manage/format/versions
5. Storage, back up of data: http://www.data-archive.ac.uk/create-manage/storage
6. Local ICT infrastructure: https://intranet.tue.nl/en/university/services/ict-services/ict-service-catalog/management-services/data-management-storage/ (TU/e intranet)
7. DataverseNL: https://dataverse.nl/dvn/
8. TIER documentation protocol: http://www.projecttier.org/