Introduction to HDF5 Data Model, Programming Model and Library APIs
HDF and HDF-EOS Workshop VIII
October 26, 2004

1
Goals
• Introduce HDF5
• Provide a basic knowledge of how data can be organized in HDF5 and how it is used by applications
• Provide some examples of how to read and write HDF5 files

2
What is HDF5?

3
What is HDF5?
• File format for storing scientific data
  – To store and organize all kinds of data
  – To share data and to port files from one platform to another
  – To overcome a limit on the number and size of the objects in a file

• Software for accessing scientific data
  – Flexible I/O library (parallel, remote, etc.)
  – Efficient storage
  – Available on almost all platforms
  – C, F90, C++, Java APIs
  – Tools (HDFView, utilities)

4
Example HDF5 file
(Figure: root group “/” containing group “/foo”; objects include a 3-D array, a table, a palette, two raster images, and a 2-D array.)

Table:
lat | lon | temp
----|-----|-----
 12 |  23 |  3.1
 15 |  24 |  4.2
 17 |  21 |  3.6
Overview
HDF5 Data Model
& I/O Library

6
HDF5 Data Model

7
HDF5 file
• HDF5 file – container for storing scientific
data
• Primary Objects
– Groups
– Datasets

• Additional means to organize data
– Attributes
– Sharable objects
– Storage and access properties

8
HDF5 Dataset
• HDF5 dataset – data array and metadata
• Data array
– ordered collection of identically typed data items
distinguished by their indices

• Metadata
– Dataspace – rank, dimensions, other spatial info
about dataset
– Datatype
– Attribute list – user-defined metadata
– Special storage options – how array is organized

9
Dataset Components

Metadata
  Dataspace: rank = 3; dimensions: Dim_1 = 4, Dim_2 = 5, Dim_3 = 7
  Datatype: IEEE 32-bit float
  Attributes: Time = 32.4, Pressure = 987, Temp = 56
  Storage info: chunked, compressed

Data
Dataspaces
• Dataspace – spatial info about a dataset
  – Rank and dimensions
    • Permanent part of dataset definition
  – Subset of points, for partial I/O
    • Needed only during I/O operations
• Applies to datasets in memory or in the file

(Example: Rank = 2, Dimensions = 4x6)

11
Sample Mappings between File Dataspaces and Memory Dataspaces

(a) Hyperslab from a 2D array to the corner of a smaller 2D array
(b) Regular series of blocks from a 2D array to a contiguous sequence at a certain offset in a 1D array
(c) A sequence of points from a 2D array to a sequence of points in a 3D array
(d) Union of hyperslabs in file to union of hyperslabs in memory

12
Datatypes (array elements)
• Datatype – how to interpret a data element
  – Permanent part of the dataset definition

• HDF5 atomic types
  – normal integer & float
  – user-definable integer and float (e.g. 13-bit integer)
  – variable-length types (e.g. strings)
  – pointers – references to objects/dataset regions
  – enumeration – names mapped to integers
  – array

• HDF5 compound types
  – Comparable to C structs
  – Members can be atomic or compound types
13
HDF5 dataset: array of records
(Figure: dataset with dimensionality 5 x 3; each record is a compound datatype with members of type int8, int4, int16, and a 2x3x2 array of float32.)
Attributes
• Attribute – data of the form “name = value”,
attached to an object
• Operations are scaled-down versions of the dataset operations
– Not extendible
– No compression
– No partial I/O

• Optional for the dataset definition
• Can be overwritten, deleted, added during
the “life” of a dataset

15
Special Storage Options

chunked       – better subsetting access time; extendable
compressed    – improves storage efficiency, transmission speed
extendable    – arrays can be extended in any direction
external file – metadata for a dataset (e.g. “Fred”) in one file (File A), raw data in another file (File B)
Groups
• Group – a mechanism for describing collections of related objects
• Every file starts with a root group “/”
• Can have attributes
• Similar to UNIX directories, but cycles are allowed
17
HDF5 objects are identified and located by their pathnames

(Figure: root group “/” with members x and foo; group foo contains temp and group bar; bar contains its own temp.)

/ (root)
/x
/foo
/foo/temp
/foo/bar/temp

18
Groups & members of groups can be shared

(Figure: root group “/” contains groups tom, dick, and harry; dataset P is shared, reachable from both tom and harry; dataset R belongs to dick.)

/tom/P
/dick/R
/harry/P

19
HDF5 I/O Library

20
Structure of HDF5 Library

Applications
Object API
Library internals
Virtual file I/O
File or other “storage”
21
Structure of HDF5 Library
Object API (C, Fortran 90, Java, C++)

• Specify objects and transformation and storage properties
• Invoke data movement operations and data transformations

Library internals (C)

• Performs data transformations and other prep for I/O
• Configurable transformations (compression, etc.)

Virtual file I/O (C only)

• Perform byte-stream I/O operations (open/close, read/write, seek)
• User-implementable I/O (stdio, network, memory, etc.)

22
Virtual file I/O layer
• A public API for writing I/O drivers
• Allows HDF5 to interface to disk, the network, memory, or a user-defined device

Virtual file I/O drivers: Stdio, File Family, MPI I/O, Memory, Network
“Storage”: File, File Family, Memory, Network

23
Intro to
HDF5 API
Programming model for sequential
access

24
Goals
• Describe the HDF5 programming model
• Give a feel for what it’s like to use the
general HDF5 API
• Review some of the key concepts of HDF5

25
General API Topics
• General info about HDF5 programming
• Creating an HDF5 file
• Creating a dataset
• Writing and reading a dataset
26
The General HDF5 API
• Currently has C, Fortran 90, Java and C++ bindings.
• C routines begin with prefix H5*, where * is a single letter indicating the object on which the operation is to be performed.
• Full functionality

Example APIs:
H5D : Dataset interface,   e.g. H5Dread
H5F : File interface,      e.g. H5Fopen
H5S : dataSpace interface, e.g. H5Sclose

27
The General Paradigm
• Properties (called creation and access
property lists) of objects are defined
(optional)
• Objects are opened or created
• Objects are then accessed
• Objects are finally closed

28
Order of Operations
• The library imposes an order on the operations
by argument dependencies
Example: A file must be opened before a
dataset because the dataset open call requires
a file handle as an argument
• Objects can be closed in any order, and
reusing a closed object will result in an error

29
HDF5 C Programming Issues
For portability, the HDF5 library has its own defined types:

hid_t:    object identifiers (native integer)
hsize_t:  size used for dimensions (unsigned long or unsigned long long)
hssize_t: for specifying coordinates and sometimes for dimensions (signed long or signed long long)
herr_t:   function return value
hvl_t:    variable-length datatype

For C, put #include <hdf5.h> at the top of your HDF5 application.

30
h5dump
Command-line Utility for Viewing HDF5 Files

h5dump [-h] [-bb] [-header] [-a <names>] [-d <names>] [-g <names>]
       [-l <names>] [-t <names>] <file>

-h          Print information on this command.
-header     Display header only; no data is displayed.
-a <names>  Display the specified attribute(s).
-d <names>  Display the specified dataset(s).
-g <names>  Display the specified group(s) and all their members.
-l <names>  Display the value(s) of the specified soft link(s).
-t <names>  Display the specified named datatype(s).

<names> is one or more appropriate object names.

31
Example of h5dump Output
(File layout: root group “/” containing dataset ‘dset’)

HDF5 "dset.h5" {
GROUP "/" {
   DATASET "dset" {
      DATATYPE { H5T_STD_I32BE }
      DATASPACE { SIMPLE ( 4, 6 ) / ( 4, 6 ) }
      DATA {
         1, 2, 3, 4, 5, 6,
         7, 8, 9, 10, 11, 12,
         13, 14, 15, 16, 17, 18,
         19, 20, 21, 22, 23, 24
      }
   }
}
}

32
Creating an HDF5 File

Steps to Create a File
1. Specify File Creation and Access Property Lists, if necessary
2. Create a file
3. Close the file and the property lists, if necessary

34
Property Lists
• A property list is a collection of values that
can be passed to HDF5 functions at lower
layers of the library
• File Creation Property List
– Controls file metadata
– Size of the user-block, sizes of file data structures,
etc.
– Specifying H5P_DEFAULT uses the default values

• Access Property List
– Controls different methods of performing I/O on
files
– Unbuffered I/O, parallel I/O, etc.
– Specifying H5P_DEFAULT uses the default values.

35
hid_t H5Fcreate (const char *name, unsigned flags,
                 hid_t create_id, hid_t access_id)

name       IN: Name of the file to access
flags      IN: File access flags
create_id  IN: File creation property list identifier
access_id  IN: File access property list identifier

herr_t H5Fclose (hid_t file_id)

file_id    IN: Identifier of the file to terminate access to
Example 1
Create a new file using default properties, then terminate access to the file:

hid_t  file_id;
herr_t status;

/* Create a new file using default properties. */
file_id = H5Fcreate ("file.h5", H5F_ACC_TRUNC,
                     H5P_DEFAULT, H5P_DEFAULT);

/* Terminate access to the file. */
status = H5Fclose (file_id);

39
h5_crtfile.c

#include <hdf5.h>
#define FILE "file.h5"

int main(void) {
   hid_t  file_id;   /* file identifier */
   herr_t status;

   /* Create a new file using default properties. */
   file_id = H5Fcreate (FILE, H5F_ACC_TRUNC,
                        H5P_DEFAULT, H5P_DEFAULT);

   /* Terminate access to the file. */
   status = H5Fclose (file_id);
   return 0;
}
Example 1: h5dump Output
(The new file contains only the root group ‘/’)

HDF5 "file.h5" {
GROUP "/" {
}
}

41
Create a Dataset

Dataset Components (recap)
Metadata
  Dataspace: rank = 3; dimensions: Dim_1 = 4, Dim_2 = 5, Dim_3 = 7
  Datatype: IEEE 32-bit float
  Attributes: Time = 32.4, Pressure = 987, Temp = 56
  Storage info: chunked, compressed
Data

43
Steps to Create a Dataset
1. Obtain location ID where dataset is to be
created
2. Define dataset characteristics (datatype,
dataspace, dataset creation property list,
if necessary)
3. Create the dataset
4. Close the datatype, dataspace, and
property list, if necessary
5. Close the dataset

44
Step 1
Step 1. Obtain the location identifier
where the dataset is to be created

Location Identifier: the file or group identifier in
which to create a dataset

45
Step 2

Step 2. Define the dataset
characteristics
– datatype
(e.g. integer)
– dataspace (2 dimensions: 100x200)
– dataset creation properties (e.g. chunked and
compressed)

46
Standard Predefined Datatypes
Examples:
H5T_IEEE_F64LE  Eight-byte, little-endian, IEEE floating-point
H5T_IEEE_F32BE  Four-byte, big-endian, IEEE floating-point
H5T_STD_I32LE   Four-byte, little-endian, signed two's complement integer
H5T_STD_U16BE   Two-byte, big-endian, unsigned integer

NOTE:
• These datatypes (DT) are the same on all platforms
• These are DT handles generated at run-time
• Used to describe DT in the HDF5 calls
• DT are not used to describe application data buffers

47
Standard Predefined Datatypes
In names such as H5T_IEEE_F64LE, the first field (IEEE, STD) names the architecture and the second field (F64LE, I32LE, U16BE) names the programming type.

48
Native Predefined Datatypes
Examples of predefined native types in C:
H5T_NATIVE_INT    (int)
H5T_NATIVE_FLOAT  (float)
H5T_NATIVE_UINT   (unsigned int)
H5T_NATIVE_LONG   (long)
H5T_NATIVE_CHAR   (char)

NOTE:
• These datatypes are NOT the same on all platforms
• These are DT handles generated at run-time

49
Dataspaces

• Dataspace: size and shape of dataset and
subset
– Dataset
• Rank: number of dimensions
• Dimensions: sizes of all dimensions
• Permanent – part of dataset definition
– Subset
• Size, shape and position of selected elements
• Needed primarily during I/O operations
• Not permanent
• (Subsetting not covered in this tutorial)

• Applies to arrays in memory or in the file
50
Creating a Simple Dataspace

hid_t H5Screate_simple (int rank, const hsize_t *dims,
                        const hsize_t *maxdims)

rank     IN: Number of dimensions of dataspace
dims     IN: An array of the size of each dimension
maxdims  IN: An array of the maximum size of each dimension
             A value of H5S_UNLIMITED specifies an unlimited dimension.
             A value of NULL specifies that dims and maxdims are the same.
Dataset Creation Property List
The dataset creation property list contains
information on how to organize data in storage.

(Figure: a chunked layout, and a chunked & compressed layout)

52
Property List Example

• Creating a dataset with "deflate" compression:

create_plist_id = H5Pcreate(H5P_DATASET_CREATE);
H5Pset_chunk(create_plist_id, ndims, chunk_dims);
H5Pset_deflate(create_plist_id, 9);
53
Remaining Steps to Create a
Dataset
3. Create the dataset
4. Close the datatype, dataspace, and
property list, if necessary
5. Close the dataset

54
hid_t H5Dcreate (hid_t loc_id, const char *name,
                 hid_t type_id, hid_t space_id,
                 hid_t create_plist_id)

loc_id          IN: Identifier of file or group to create the dataset within
name            IN: The name of (the link to) the dataset to create
type_id         IN: Identifier of datatype to use when creating the dataset
space_id        IN: Identifier of dataspace to use when creating the dataset
create_plist_id IN: Identifier of the dataset creation property list (or H5P_DEFAULT)
Example 2 – Create an empty 4x6 dataset

hid_t   file_id, dataset_id, dataspace_id;
hsize_t dims[2];
herr_t  status;

/* Create a new file. */
file_id = H5Fcreate ("dset.h5", H5F_ACC_TRUNC,
                     H5P_DEFAULT, H5P_DEFAULT);

/* Create a dataspace: rank 2, current dims 4x6;
   maxdims = NULL sets maxdims equal to the current dims. */
dims[0] = 4;
dims[1] = 6;
dataspace_id = H5Screate_simple (2, dims, NULL);

/* Create a dataset: pathname "dset", datatype H5T_STD_I32BE,
   the dataspace above, and the default creation property list. */
dataset_id = H5Dcreate (file_id, "dset", H5T_STD_I32BE,
                        dataspace_id, H5P_DEFAULT);

/* Terminate access to the dataset, dataspace, and file. */
status = H5Dclose (dataset_id);
status = H5Sclose (dataspace_id);
status = H5Fclose (file_id);
Example 2: h5dump Output
An empty 4x6 dataset (root group “/” containing dataset ‘dset’):

HDF5 "dset.h5" {
GROUP "/" {
   DATASET "dset" {
      DATATYPE { H5T_STD_I32BE }
      DATASPACE { SIMPLE ( 4, 6 ) / ( 4, 6 ) }
      DATA {
         0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 0, 0
      }
   }
}
}

62
Writing and Reading Datasets
Dataset I/O
• Dataset I/O involves
– reading or writing
– all or part of a dataset
– Compressed/uncompressed

• During I/O operations data is translated between the source & destination (file-to-memory, memory-to-file)
  – Datatype conversion
    • datatypes of the same class (e.g. 16-bit integer => 32-bit integer)
  – Dataspace conversion
    • dataspace (e.g. 10x20 2D array => 200-element 1D array)

64
Partial I/O
• Selected elements (called selections) from source
are mapped (read/written) to the selected
elements in destination
• Selection
– Selections in memory can differ from selection in file
– Number of selected elements is always the same in
source and destination

• Selection can be
– Hyperslabs (contiguous blocks, regularly spaced blocks)
– Points
– Results of set operations (union, difference, etc.) on
hyperslabs or points

65
Reading Dataset into Memory from File

File: 2D array of 16-bit ints, with a 2-D array selected
Memory: 3D array of 32-bit ints, with a regularly spaced series of cubes selected

The only restriction is that the number of selected elements on the file side be the same as on the memory side.

69
Steps for Dataset Writing/Reading

1. If necessary, open the file to obtain the file ID
2. Open the dataset to obtain the dataset ID
3. Specify
   – Memory datatype
     (The library “knows” the file datatype – you do not need to specify it!)
   – Memory dataspace
   – File dataspace
   – Transfer properties (optional)
4. Perform the desired operation on the dataset
5. Close the dataspace, datatype and property lists, if necessary

70
Data Transfer Property List

The data transfer property list is used to control
various aspects of the I/O, such as caching hints
or collective I/O information.

71
Reading Dataset into Memory from File

(Figure: anatomy of an H5Dread call)

H5Dread ( dataset,           – has its own dataspace and data; in the file, a 2D array with a selected rectangle
          memory datatype,   – e.g. floats
          memory dataspace,  – with a selection: a 3D array with a selected union of cubes
          file dataspace,    – a copy of the file dataspace, with a selection
          transfer plist,    – I/O hints
          buffer )

72
hid_t H5Dopen (hid_t loc_id, const char *name)

loc_id  IN: Identifier of the file or group in which to open a dataset
name    IN: The name of the dataset to access

NOTE: File datatype and dataspace are known when a dataset is opened
herr_t H5Dwrite (hid_t dataset_id, hid_t mem_type_id,
                 hid_t mem_space_id, hid_t file_space_id,
                 hid_t xfer_plist_id, const void *buf)

dataset_id    IN: Identifier of the dataset to write to
mem_type_id   IN: Identifier of the memory datatype of the dataset
mem_space_id  IN: Identifier of the memory dataspace (or H5S_ALL)
file_space_id IN: Identifier of the file dataspace (or H5S_ALL)
xfer_plist_id IN: Identifier of the data transfer properties to use (or H5P_DEFAULT)
buf           IN: Buffer with data to be written to the file
Example 3 – Writing to an existing dataset

hid_t  file_id, dataset_id;
herr_t status;
int    i, j, dset_data[4][6];

/* Initialize the buffer. */
for (i = 0; i < 4; i++)
   for (j = 0; j < 6; j++)
      dset_data[i][j] = i * 6 + j + 1;

/* Open the existing file and dataset. */
file_id = H5Fopen ("dset.h5", H5F_ACC_RDWR, H5P_DEFAULT);
dataset_id = H5Dopen (file_id, "dset");

/* Write to the dataset. */
status = H5Dwrite (dataset_id, H5T_NATIVE_INT,
                   H5S_ALL, H5S_ALL, H5P_DEFAULT, dset_data);
Example 3: h5dump Output
HDF5 "dset.h5" {
GROUP "/" {
DATASET "dset" {
DATATYPE { H5T_STD_I32BE }
DATASPACE { SIMPLE ( 4, 6 ) / ( 4, 6 ) }
DATA {
1, 2, 3, 4, 5, 6,
7, 8, 9, 10, 11, 12,
13, 14, 15, 16, 17, 18,
19, 20, 21, 22, 23, 24
}
}
}
}

79
For more information…
• HDF website
  – http://hdf.ncsa.uiuc.edu/
• HDF5 Information Center
  – http://hdf.ncsa.uiuc.edu/HDF5/
• HDF Helpdesk
  – hdfhelp@ncsa.uiuc.edu
• HDF users mailing list
  – hdfnews@ncsa.uiuc.edu

80
Thank you

81

Finology Group – Insurtech Innovation Award 2024The Digital Insurer
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityPrincipled Technologies
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Enterprise Knowledge
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?Antenna Manufacturer Coco
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Scriptwesley chun
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonetsnaman860154
 
Advantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessAdvantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessPixlogix Infotech
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Miguel Araújo
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024Rafal Los
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024The Digital Insurer
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CVKhem
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationSafe Software
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsMaria Levchenko
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024Results
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘RTylerCroy
 

Recently uploaded (20)

Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
Advantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessAdvantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your Business
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CV
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 

Introduction to HDF5 Data Model, Programming Model and Library APIs

  • 11. Dataspaces • Dataspace – spatial info about a dataset – Rank and dimensions • Permanent part of dataset definition – Subset of points, for partial I/O • Needed only during I/O operations Rank = 2 Dimensions = 4x6 • Apply to datasets in memory or in the file 11
  • 12. Sample Mappings between File Dataspaces and Memory Dataspaces (a) Hyperslab from a 2D array to the corner of a smaller 2D array (c) A sequence of points from a 2D array to a sequence of points in a 3D array. (b) Regular series of blocks from a 2D array to a contiguous sequence at a certain offset in a 1D array (d) Union of hyperslabs in file to union of hyperslabs in memory. 12
  • 13. Datatypes (array elements) • Datatype – how to interpret a data element – Permanent part of the dataset definition • HDF5 atomic types – normal integer & float – user-definable integer and float (e.g. 13-bit integer) – variable length types (e.g. strings) – pointers: references to objects/dataset regions – enumeration: names mapped to integers – array • HDF5 compound types – Comparable to C structs – Members can be atomic or compound types 13
  • 14. HDF5 dataset: array of records 3 5 Dimensionality: 5 x 3 int8 int4 int16 Datatype: Record 2x3x2 array of float32
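A compound datatype like the record on slide 14 maps naturally onto a C struct. The sketch below is plain C with no HDF5 calls; the member names are made up for illustration, and the slide's 4-bit integer member is replaced with int8_t since C has no 4-bit type. The byte offsets it computes are exactly what you would pass to H5Tinsert when registering such a compound type with the library.

```c
#include <stddef.h>
#include <stdint.h>

/* A record type comparable to an HDF5 compound datatype.
   Member names are hypothetical; the slide's record shows small
   integer members plus a 2x3x2 float32 array member. */
typedef struct {
    int8_t  a;              /* stands in for the slide's int8 */
    int16_t b;              /* stands in for the slide's int16 */
    float   arr[2][3][2];   /* 2x3x2 array of float32 */
} record_t;

/* Byte offset of each member, as H5Tinsert would need it. */
size_t offset_a(void)   { return offsetof(record_t, a); }
size_t offset_b(void)   { return offsetof(record_t, b); }
size_t offset_arr(void) { return offsetof(record_t, arr); }
```

Because the compiler may insert padding, the offsets are computed with offsetof rather than assumed.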
  • 15. Attributes • Attribute – data of the form “name = value”, attached to an object • Operations are scaled-down versions of the dataset operations – Not extendible – No compression – No partial I/O • Optional for the dataset definition • Can be overwritten, deleted, added during the “life” of a dataset 15
  • 16. Special Storage Options • chunked – better subsetting access time; extendable • compressed – improves storage efficiency, transmission speed • extendable – arrays can be extended in any direction • external file – metadata in one file, raw data in another (e.g. metadata for dataset “Fred” in File A, data for Fred in File B)
  • 17. Groups • Group – a mechanism for describing collections of related objects “/” • Every file starts with a root group • Can have attributes • Similar to UNIX directories, but cycles are allowed 17
  • 18. HDF5 objects are identified and located by their pathnames / (root) /x /foo /foo/temp /foo/bar/temp foo temp “/” bar temp 18 x
  • 19. Groups & members of groups can be shared tom P R “/” harry dick P /tom/P /dick/R /harry/P 19
  • 21. Structure of HDF5 Library Applications Object API Library internals Virtual file I/O File or other “storage” 21
  • 22. Structure of HDF5 Library Object API (C, Fortran 90, Java, C++) • Specify objects and transformation and storage properties • Invoke data movement operations and data transformations Library internals (C) • Performs data transformations and other prep for I/O • Configurable transformations (compression, etc.) Virtual file I/O (C only) • Perform byte-stream I/O operations (open/close, read/write, seek) • User-implementable I/O (stdio, network, memory, etc.) 22
  • 23. Virtual file I/O layer • A public API for writing I/O drivers • Allows HDF5 to interface to disk, the network, memory, or a user-defined device • Virtual file I/O drivers: Stdio, File, Family, MPI I/O, Memory, Network • “Storage”: File, File Family, Memory, Network 23
  • 24. Intro to HDF5 API Programming model for sequential access 24
  • 25. Goals • Describe the HDF5 programming model • Give a feel for what it’s like to use the general HDF5 API • Review some of the key concepts of HDF5 25
  • 26. General API Topics • General info about HDF5 programming • Creating an HDF5 file • Creating a dataset • Writing and reading a dataset 26
  • 27. The General HDF5 API • Currently has C, Fortran 90, Java and C++ bindings. • C routines begin with prefix H5*, where * is a single letter indicating the object on which the operation is to be performed. • Full functionality. Example APIs: H5D: Dataset interface, e.g. H5Dread; H5F: File interface, e.g. H5Fopen; H5S: dataSpace interface, e.g. H5Sclose 27
  • 28. The General Paradigm • Properties (called creation and access property lists) of objects are defined (optional) • Objects are opened or created • Objects are then accessed • Objects are finally closed 28
  • 29. Order of Operations • The library imposes an order on the operations by argument dependencies Example: A file must be opened before a dataset because the dataset open call requires a file handle as an argument • Objects can be closed in any order, and reusing a closed object will result in an error 29
  • 30. HDF5 C Programming Issues For portability, the HDF5 library has its own defined types: hid_t: object identifiers (native integer); hsize_t: size used for dimensions (unsigned long or unsigned long long); hssize_t: for specifying coordinates and sometimes for dimensions (signed long or signed long long); herr_t: function return value; hvl_t: variable length datatype. For C, put #include <hdf5.h> at the top of your HDF5 application. 30
  • 31. h5dump Command-line Utility for Viewing HDF5 Files h5dump [-h] [-bb] [-header] [-a <names>] [-d <names>] [-g <names>] [-l <names>] [-t <names>] <file> -h Print information on this command. -header Display header only; no data is displayed. -a <names> Display the specified attribute(s). -d <names> Display the specified dataset(s). -g <names> Display the specified group(s) and all the members. -l <names> Display the value(s) of the specified soft link(s). -t <names> Display the specified named datatype(s). <names> is one or more appropriate object names. 31
  • 32. Example of h5dump Output HDF5 "dset.h5" { GROUP "/" { DATASET "dset" { DATATYPE { H5T_STD_I32BE } DATASPACE { SIMPLE ( 4, 6 ) / ( 4, 6 ) } DATA { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24 } } } } 32
  • 34. Steps to Create a File 1. Specify File Creation and Access Property Lists, if necessary 2. Create a file 3. Close the file and the property lists, if necessary 34
  • 35. Property Lists • A property list is a collection of values that can be passed to HDF5 functions at lower layers of the library • File Creation Property List – Controls file metadata – Size of the user-block, sizes of file data structures, etc. – Specifying H5P_DEFAULT uses the default values • Access Property List – Controls different methods of performing I/O on files – Unbuffered I/O, parallel I/O, etc. – Specifying H5P_DEFAULT uses the default values. 35
  • 36. hid_t H5Fcreate (const char *name, unsigned flags, hid_t create_id, hid_t access_id) name IN: Name of the file to access; flags IN: File access flags; create_id IN: File creation property list identifier; access_id IN: File access property list identifier
  • 37. herr_t H5Fclose (hid_t file_id) file_id IN: Identifier of the file to terminate access to
  • 38. Example 1 – Create a new file using default properties, then terminate access to the file (slides 38–39 step through this code)
   hid_t file_id;
   herr_t status;
   /* Create a new file using default properties */
   file_id = H5Fcreate ("file.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
   /* Terminate access to the file */
   status = H5Fclose (file_id);
  • 40. h5_crtfile.c
   #include <hdf5.h>
   #define FILE "file.h5"
   int main(void) {
      hid_t file_id;    /* file identifier */
      herr_t status;
      /* Create a new file using default properties. */
      file_id = H5Fcreate (FILE, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
      /* Terminate access to the file. */
      status = H5Fclose (file_id);
      return 0;
   }
  • 41. Example 1: h5dump Output HDF5 "file.h5" { GROUP "/" { } } ‘/’ 41
  • 43. Dataset Components Metadata Data Dataspace Rank Dimensions 3 Dim_1 = 4 Dim_2 = 5 Dim_3 = 7 Datatype IEEE 32-bit float Attributes Storage info Time = 32.4 Chunked Pressure = 987 Compressed Temp = 56 43
  • 44. Steps to Create a Dataset 1. Obtain location ID where dataset is to be created 2. Define dataset characteristics (datatype, dataspace, dataset creation property list, if necessary) 3. Create the dataset 4. Close the datatype, dataspace, and property list, if necessary 5. Close the dataset 44
  • 45. Step 1 Step 1. Obtain the location identifier where the dataset is to be created Location Identifier: the file or group identifier in which to create a dataset 45
  • 46. Step 2 Step 2. Define the dataset characteristics – datatype (e.g. integer) – dataspace (2 dimensions: 100x200) – dataset creation properties (e.g. chunked and compressed) 46
  • 47. Standard Predefined Datatypes Examples: H5T_IEEE_F64LE Eight-byte, little-endian, IEEE floating-point H5T_IEEE_F32BE Four-byte, big-endian, IEEE floating point H5T_STD_I32LE Four-byte, little-endian, signed two's complement integer H5T_STD_U16BE Two-byte, big-endian, unsigned integer NOTE: • These datatypes (DT) are the same on all platforms • These are DT handles generated at run-time • Used to describe DT in the HDF5 calls • DT are not used to describe application data buffers 47
  • 48. Standard Predefined Datatypes Examples: H5T_IEEE_F64LE Eight-byte, little-endian, IEEE floating-point H5T_IEEE_F32BE Four-byte, big-endian, IEEE floating point H5T_STD_I32LE Four-byte, little-endian, signed two's complement integer H5T_STD_U16BE Two-byte, big-endian, unsigned integer Architecture Programming Type 48
  • 49. Native Predefined Datatypes Examples of predefined native types in C: H5T_NATIVE_INT H5T_NATIVE_FLOAT H5T_NATIVE_UINT H5T_NATIVE_LONG H5T_NATIVE_CHAR (int) (float ) (unsigned int) (long ) (char ) NOTE: • These datatypes are NOT the same on all platforms • These are DT handles generated at run-time 49
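The native predefined types track whatever the C compiler uses on the current platform, which is why they are not the same everywhere. A quick plain-C sketch (no HDF5 calls) of the sizes behind H5T_NATIVE_CHAR, H5T_NATIVE_INT, H5T_NATIVE_FLOAT and H5T_NATIVE_LONG:

```c
#include <stdio.h>

/* Print the platform-dependent C sizes that the H5T_NATIVE_*
   handles correspond to. Only sizeof(char) == 1 is guaranteed
   by the C standard; the rest vary by platform. */
void print_native_sizes(void) {
    printf("char:  %zu\n", sizeof(char));
    printf("int:   %zu\n", sizeof(int));
    printf("float: %zu\n", sizeof(float));
    printf("long:  %zu\n", sizeof(long));
}
```

This is exactly why native types should describe in-memory buffers, while the standard (H5T_STD_*, H5T_IEEE_*) types describe data in the file.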
  • 50. Dataspaces • Dataspace: size and shape of dataset and subset – Dataset • Rank: number of dimensions • Dimensions: sizes of all dimensions • Permanent – part of dataset definition – Subset • Size, shape and position of selected elements • Needed primarily during I/O operations • Not permanent • (Subsetting not covered in this tutorial) • Applies to arrays in memory or in the file 50
  • 51. Creating a Simple Dataspace hid_t H5Screate_simple (int rank, const hsize_t *dims, const hsize_t *maxdims) rank IN: Number of dimensions of dataspace; dims IN: An array of the size of each dimension; maxdims IN: An array of the maximum size of each dimension. A value of H5S_UNLIMITED specifies the unlimited dimension. A value of NULL specifies that dims and maxdims are the same.
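The number of elements a simple dataspace describes is just the product of its current dimension sizes. A minimal plain-C sketch of that arithmetic (my_hsize_t is a stand-in for hsize_t so the sketch has no HDF5 dependency); this mirrors what H5Sget_simple_extent_npoints reports for a dataspace created with H5Screate_simple:

```c
typedef unsigned long long my_hsize_t;  /* stand-in for hsize_t */

/* Total element count of a simple dataspace: the product of its
   current dimension sizes over all `rank` dimensions. */
my_hsize_t npoints(int rank, const my_hsize_t *dims) {
    my_hsize_t n = 1;
    for (int i = 0; i < rank; i++)
        n *= dims[i];
    return n;
}
```

For the deck's examples: the 4x6 dataspace holds 24 elements, and the 4x5x7 dataspace from the Dataset Components slide holds 140.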
  • 52. Dataset Creation Property List The dataset creation property list contains information on how to organize data in storage. Chunked Chunked & compressed 52
  • 53. Property List Example • Creating a dataset with “deflate” compression: create_plist_id = H5Pcreate(H5P_DATASET_CREATE); H5Pset_chunk(create_plist_id, ndims, chunk_dims); H5Pset_deflate(create_plist_id, 9); 53
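Chunked storage divides the array into fixed-size chunks, so when choosing chunk_dims for H5Pset_chunk it helps to know how many chunks each dimension will need: the ceiling of the dimension size divided by the chunk size. A quick sketch of that arithmetic (plain C, no HDF5 calls):

```c
/* Number of chunks needed along one dimension: ceiling division
   of the dataset's dimension size by the chunk size along that
   dimension. Relevant when sizing chunk_dims for H5Pset_chunk. */
unsigned long nchunks(unsigned long dim_size, unsigned long chunk_size) {
    return (dim_size + chunk_size - 1) / chunk_size;
}
```

For example, a 100-element dimension with 30-element chunks needs 4 chunks, the last one only partially filled.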
  • 54. Remaining Steps to Create a Dataset 3. Create the dataset 4. Close the datatype, dataspace, and property list, if necessary 5. Close the dataset 54
  • 55. hid_t H5Dcreate (hid_t loc_id, const char *name, hid_t type_id, hid_t space_id, hid_t create_plist_id) loc_id IN: Identifier of file or group to create the dataset within name IN: The name of (the link to) the dataset to create type_id IN: Identifier of datatype to use when creating the dataset space_id IN: Identifier of dataspace to use when creating the dataset create_plist_id IN: Identifier of the dataset creation property list (or H5P_DEFAULT)
  • 56. Example 2 – Create an empty 4x6 dataset (slides 56–61 step through this code)
   hid_t file_id, dataset_id, dataspace_id;
   hsize_t dims[2];
   herr_t status;
   /* Create a new file */
   file_id = H5Fcreate ("dset.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
   /* Create a dataspace: rank 2, current dims 4x6; passing NULL sets maxdims to the current dims */
   dims[0] = 4;
   dims[1] = 6;
   dataspace_id = H5Screate_simple (2, dims, NULL);
   /* Create a dataset: pathname "dset", datatype H5T_STD_I32BE, the dataspace, default property list */
   dataset_id = H5Dcreate (file_id, "dset", H5T_STD_I32BE, dataspace_id, H5P_DEFAULT);
   /* Terminate access to dataset, dataspace, and file */
   status = H5Dclose (dataset_id);
   status = H5Sclose (dataspace_id);
   status = H5Fclose (file_id);
  • 62. Example2: h5dump Output An empty 4x6 dataset HDF5 "dset.h5" { GROUP "/" { DATASET "dset" { DATATYPE { H5T_STD_I32BE } DATASPACE { SIMPLE ( 4, 6 ) / ( 4, 6 ) } DATA { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 } } } } “/” ‘dset’ 62
  • 64. Dataset I/O • Dataset I/O involves – reading or writing – all or part of a dataset – compressed/uncompressed • During I/O operations data is translated between the source & destination (file to memory, memory to file) – Datatype conversion • datatypes (e.g. 16-bit integer => 32-bit integer) of the same class – Dataspace conversion • dataspace (e.g. 10x20 2D array => 200-element 1D array) 64
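The datatype conversion above happens inside the library, but the effect is easy to picture. A plain-C sketch (no HDF5 calls, function name is mine) of what the library does during an H5Dread when the file holds 16-bit integers and the memory buffer was declared with a 32-bit memory type such as H5T_NATIVE_INT: each element is widened, preserving its value, because both types belong to the same (integer) class.

```c
#include <stdint.h>
#include <stddef.h>

/* Element-by-element widening from a 16-bit file representation
   to a 32-bit in-memory representation, value-preserving. */
void convert_i16_to_i32(const int16_t *src, int32_t *dst, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] = (int32_t)src[i];
}
```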
  • 65. Partial I/O • Selected elements (called selections) from source are mapped (read/written) to the selected elements in destination • Selection – Selections in memory can differ from selection in file – Number of selected elements is always the same in source and destination • Selection can be – Hyperslabs (contiguous blocks, regularly spaced blocks) – Points – Results of set operations (union, difference, etc.) on hyperslabs or points 65
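The rule that source and destination selections must contain the same number of elements is easy to check for regular hyperslabs: the count is the product, over all dimensions, of count[i] * block[i] (start and stride position the blocks but do not change how many elements are selected). A plain-C sketch of that check (no HDF5 calls, function name is mine):

```c
/* Element count of a regular hyperslab selection: the product of
   count[i] * block[i] over all `rank` dimensions. Two selections
   used in one partial I/O must agree on this number. */
unsigned long hyperslab_nelems(int rank,
                               const unsigned long *count,
                               const unsigned long *block) {
    unsigned long n = 1;
    for (int i = 0; i < rank; i++)
        n *= count[i] * block[i];
    return n;
}
```

For instance, a 2D selection of 2x3 blocks, each 2x2 elements, selects 24 elements, so it could be read into any memory selection of 24 elements, whatever its shape.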
  • 66. Reading Dataset into Memory from File File 2D array of 16-bit ints Memory 3D array of 32-bit ints 66
  • 67. Reading Dataset into Memory from File File Memory 2D array of 16-bit ints 3D array of 32-bit ints 2-d array Regularly spaced series of cubes 67
  • 68. Reading Dataset into Memory from File File Memory 2D array of 16-bit ints 3D array of 32-bit ints 2-d array Regularly spaced series of cubes The only restriction is that the number of selected elements on the left be the same as on the right. 68
  • 69. Reading Dataset into Memory from File File Memory 2D array of 16-bit ints 3D array of 32-bit ints Read 69
  • 70. Steps for Dataset Writing/Reading 1. If necessary, open the file to obtain the file ID 2. Open the dataset to obtain the dataset ID 3. Specify – Memory datatype (the library “knows” the file datatype – no need to specify it) – Memory dataspace – File dataspace – Transfer properties (optional) 4. Perform the desired operation on the dataset 5. Close dataspace, datatype and property lists, if necessary 70
  • 71. Data Transfer Property List The data transfer property list is used to control various aspects of the I/O, such as caching hints or collective I/O information. 71
  • 72. Reading Dataset into Memory from File – diagram: H5Dread takes the dataset (with a copy of the file dataspace and its selection, plus an I/O hint via the dataset transfer property list), the memory datatype, the memory dataspace and its selection, and the data buffer; file side: 2D array with selected rectangle; memory side: 3D array of floats with selected union of cubes 72
  • 73. hid_t H5Dopen (hid_t loc_id, const char *name) loc_id IN: Identifier of the file or group in which to open a dataset; name IN: The name of the dataset to access. NOTE: File datatype and dataspace are known when a dataset is opened
  • 74. herr_t H5Dwrite (hid_t dataset_id, hid_t mem_type_id, hid_t mem_space_id, hid_t file_space_id, hid_t xfer_plist_id, const void *buf) dataset_id IN: Identifier of the dataset to write to; mem_type_id IN: Identifier of memory datatype of the dataset; mem_space_id IN: Identifier of the memory dataspace (or H5S_ALL); file_space_id IN: Identifier of the file dataspace (or H5S_ALL); xfer_plist_id IN: Identifier of the data transfer properties to use (or H5P_DEFAULT); buf IN: Buffer with data to be written to the file
  • 75. Example 3 – Writing to an existing dataset (slides 75–78 step through this code)
   hid_t file_id, dataset_id;
   herr_t status;
   int i, j, dset_data[4][6];
   /* Initialize buffer */
   for (i = 0; i < 4; i++)
      for (j = 0; j < 6; j++)
         dset_data[i][j] = i * 6 + j + 1;
   /* Open existing file and dataset */
   file_id = H5Fopen ("dset.h5", H5F_ACC_RDWR, H5P_DEFAULT);
   dataset_id = H5Dopen (file_id, "dset");
   /* Write to dataset */
   status = H5Dwrite (dataset_id, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, dset_data);
  • 79. Example 3: h5dump output

  HDF5 "dset.h5" {
  GROUP "/" {
     DATASET "dset" {
        DATATYPE { H5T_STD_I32BE }
        DATASPACE { SIMPLE ( 4, 6 ) / ( 4, 6 ) }
        DATA {
           1, 2, 3, 4, 5, 6,
           7, 8, 9, 10, 11, 12,
           13, 14, 15, 16, 17, 18,
           19, 20, 21, 22, 23, 24
        }
     }
  }
  }
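  The DATA values in the dump follow C row-major order, matching the fill loop from Example 3. That correspondence can be checked in plain C with no HDF5 calls at all; the check below is an illustration added for this writeup, not part of the original slides:

  ```c
  #include <assert.h>
  #include <stdio.h>

  int main(void)
  {
      int i, j, expected = 1, dset_data[4][6];

      /* Fill the 4x6 buffer exactly as Example 3 does. */
      for (i = 0; i < 4; i++)
          for (j = 0; j < 6; j++)
              dset_data[i][j] = i * 6 + j + 1;

      /* In row-major order the values should read 1..24,
         which is what h5dump printed for the dataset. */
      for (i = 0; i < 4; i++)
          for (j = 0; j < 6; j++)
              assert(dset_data[i][j] == expected++);

      printf("first = %d, last = %d\n", dset_data[0][0], dset_data[3][5]);
      return 0;
  }
  ```

  Because C stores the buffer contiguously in row-major order, H5Dwrite with H5S_ALL for both dataspaces writes the elements in exactly this sequence.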
  • 80. For more information…
  • HDF website – http://hdf.ncsa.uiuc.edu/
  • HDF5 Information Center – http://hdf.ncsa.uiuc.edu/HDF5/
  • HDF Helpdesk – hdfhelp@ncsa.uiuc.edu
  • HDF users mailing list – hdfnews@ncsa.uiuc.edu

Editor's Notes

  1. Format and software for scientific data. HDF5 is a different format from earlier versions of HDF, as is the library. Stores images, multidimensional arrays, tables, etc. That is, you can construct all of these different kinds of structures and store them in HDF5. You can also mix and match them in HDF5 files according to your needs. Emphasis on storage and I/O efficiency: both the library and the format are designed to address this. Free and commercial software support: as far as HDF5 goes, this is just a goal now. There is commercial support for HDF4, but little if any for HDF5 at this time. We are working with vendors to change this. Emphasis on standards: you can store data in HDF5 in a variety of ways, so we try to work with users to encourage them to organize HDF5 files in standard ways. Users come from many engineering and scientific fields.
  2. This shows that you can mix objects of different types according to your needs. Typically, there will be metadata stored with objects to indicate what type of object they are. Like HDF4, HDF5 has a grouping structure. The main difference is that every HDF5 file starts with a root group, whereas HDF4 doesn’t need any groups at all.
  3. Here is an example of a basic HDF5 object. Notice that each element in the 3D array is a record with four values in it.
  4. Like HDF4, HDF5 has a grouping structure. The main difference is that every HDF5 file starts with a root group, whereas HDF4 doesn’t need any groups at all.
  6. FILE ACCESS MODES:
  H5F_ACC_TRUNC: overwrite existing files
  H5F_ACC_RDWR: open for read and write
  H5F_ACC_RDONLY: read only
  H5F_ACC_EXCL: fail if file exists
  H5F_ACC_DEBUG: print debug info
  FILE CREATION PROPERTY LIST (H5P_DEFAULT): controls file metadata maintained in the boot block. The parameters that can be modified are:
  User-Block Size: the user block is a fixed-length block of data located at the beginning of the file which is ignored by the HDF5 library and may be used to store any information found to be useful to applications. This value may be set to any power of two equal to 512 or greater.
  Offset and Length Sizes: the number of bytes used to store the offset and length of objects in the HDF5 file can be controlled with this parameter.
  Symbol Table Parameters: the size of symbol table B-trees is controlled by setting the 1/2 rank and 1/2 node size parameters of the B-tree.
  Indexed Storage Parameters: the size of indexed-storage B-trees can be controlled by setting the 1/2 rank and 1/2 node size parameters of the B-tree.
  FILE ACCESS PROPERTY LISTS: used to control different methods of performing I/O on files.
  Unbuffered I/O – local permanent files can be accessed with the functions in the POSIX manual (open(), lseek(), read(), write(), close()).
  Buffered I/O – local permanent files can be accessed with the functions declared in the stdio.h header file (fopen(), fseek(), fread(), fwrite(), fclose()).
  Memory I/O – local temporary files can be created and accessed directly from memory without ever creating permanent storage.
  Parallel files using MPI I/O – allows parallel access to a file through the MPI I/O library. The MPI communicator, the info object, and the access mode can be modified.
  Data Alignment – sometimes file access is faster if certain things are aligned on file blocks.
  8. H5Screate_simple:
  rank: number of dimensions of the dataspace
  dims: an array of the size of each dimension
  maxdims: an array of the maximum size of each dimension
  NULL: maxdims is the same as dims
  0: unlimited dimension
  9. H5Dcreate: name – name of the dataset or group, specified relative to loc_id. data type:
  10. File access property list - H5P_DEFAULT (3rd parameter)