When the new option is given, SDS objects that do not belong to any HDF-EOS2 grid or swath will still be converted. Previously, -nc4strict rejected such objects, and -nc4 converted them without error, but the generated file couldn't be read by netCDF-4. The new option gives users a middle ground: the converter does not reject these objects, but creates "fake dimensions" so that netCDF-4 does not complain about the absence of dimensions.
In this context, a "fake" dimension is an empty HDF5 dimension scale. The fake di...
The buffer size for reference numbers was too small; it was only 5 bytes long.
The file attached to bugzilla #1373 contains a five-digit reference number. Since the buffer was only 5 bytes long, that large reference number corrupted neighboring memory. After this change, the buffer is 32 bytes long. This is neither a permanent nor a fully reliable solution, but it will be adequate for at least the next few years.
Force the chunk size to be equal to or smaller than the number of elements when creating an HDF5 dataset. (Bugzilla #1373)
An HDF-EOS2 file attached to bugzilla #1373 contains a field whose size is 406 * 271, yet its chunk size is 10 * 1354. This appears to be allowed in HDF4, but it is not in HDF5. During H5Dcreate(), the following error occurred:
#009: h:\hdf\hdf5-1.8.1\src\h5dint.c line 1209 in H5D_create(): chunk size must be <= maximum dimension size for fixed-sized ...
Subtle changes - Drop unnecessary data immediately.
This changelist is related to the previous revisions 341 and 342. It makes h4toh5 drop unnecessary data promptly.
Also, some error handling code was fixed.
Do not use macros that are defined only if HDF4 is built with the netCDF interface.
Two macros defined by HDF4, MAX_VAR_DIMS and MAX_NC_NAME, had been used. It turned out that these are defined by HDF4 only when the netCDF interface is enabled. Since H4H5TOOLS does not need the netCDF interface, it was changed to no longer use those macros:
MAX_VAR_DIMS -> H4_MAX_VAR_DIMS
MAX_NC_NAME -> FIELDNAMELENMAX
Related bug:
http://bugzilla.hdfgroup.uiuc.edu/show_bug.cgi?id=1218
Improve EOS2 conversion performance. Previously, all data from every field in a grid (or swath) were read even though they were not always needed. This changelist implements lazy data fetching: if data is not requested, it is never read.
Improve EOS2 conversion performance. Previously, the whole grid (or swath) structure was read whenever a field was visited, which gave very poor performance. This changelist remembers the last grid (or swath) structure to avoid re-reading the whole structure again and again. If a field belongs to the same grid (or swath) as the previous field, the remembered structure is simply reused.
Assuming that one grid (or swath) contains several fields and the fields are traversed in DFS order, a fie...
Fix a bug in empty SDS conversion. If SDcheckempty() reported an SDS as empty, its dimension scales were not converted. This changelist fixes this so that HDF-EOS2 dimensions are correctly converted into netCDF-4 dimensions.
Depending on the return value of SDcheckempty(), H4toH5_sds() behaved very differently:
if SDcheckempty() says this SDS is not empty,
create an HDF5 dataset
convert attributes ...
call H4toH5all_dimscale()
else
call convert_sdsfillvalue()
That is, ...
A dimension that was never referred to by any field was omitted. Swath data previously suffered from this problem, and that was fixed. This changelist fixes the similar problem that occurred during Grid conversion.
Much of the code was rearranged, and some unused code was removed.
When check_field_existence() checks whether a field is reachable through the HDF-EOS2 API, it didn't search Geolocation fields. So, if "Longitude" residing in the Geolocation fields was inquired about, this function wrongly reported that "Longitude" doesn't exist.
This problem resulted in "_FV_Latitude" being reported as neither an HDF-EOS2 attribute nor an HDF-EOS2 field. "_FV_Latitude" is not reachable through the HDF-EOS2 API, but this does not always mean a failure because prepending "_FV_" is the way HDF-EOS2 puts ...
Correctly convert HDF-EOS2 fields of character type.
Previously, the converter turned those fields into netCDF-4 variables with SCALAR dataspaces. This made the netCDF API nc_inq_dim() fail to fetch the associated dimensions. Now, all fields of any type are converted into netCDF-4 variables with SIMPLE dataspaces, without exception.
Note that textual attributes still need a SCALAR dataspace, and this changelist doesn't affect that part.
Add additional test cases to test slash-to-underscore conversion
The previous changelist converts slashes in SDS, Vgroup, and Vdata names into underscores. This changelist adds two test cases covering that change:
- grid_badname.hdf
- swath_badname.hdf
Also, this updates all expected results.
Make EOS2-specific routines handle field and group names with slashes.
Previously, all slashes in SDS names were converted into underscores, but this conversion was not done for Vgroup and Vdata names. This change makes them consistent. To do this, it keeps a pair of the original name and the changed name so that any name mangling can be handled in the future.
Change an HDF-EOS2 field name if it contains a slash so that netCDF-4 (HDF5) does not interpret the slash as a path delimiter.
An HDF4 SDS representing an HDF-EOS2 field does not have this problem because existing code already handles it correctly. The problem happened only when an HDF4 Vdata representing an HDF-EOS2 field was converted into an HDF5 dataset. In that case, the conversion is done in the HDF-EOS2-specific routine, and this routine didn't correct the name.
This changelist lets t...
Convert all HDF-EOS2 dimensions even if they are never referred to by any fields.
When a swath field is converted, convert all dimensions first regardless of usage.
Example:
An HDF-EOS2 file from NSIDC has 16 dimensions, but only 15 of them are actually used. Previously, the converter didn't convert the one unused dimension because conversion was done on demand.
Update expected results.
All affected files were generated using the latest HDF5 1.6, which is more recent than the 1.6.7 release. The following is from svn info:
URL: http://svn.hdfgroup.uiuc.edu/hdf5/branches/hdf5_1_6
Repository Root: http://svn.hdfgroup.uiuc.edu/hdf5
Repository UUID: dab4d1a6-ed17-0410-a064-d1ae371a2980
Revision: 15521
REFERENCE_LIST was not updated correctly.
h4toh5 grows the REFERENCE_LIST attribute as a dimension scale is referred to by more datasets. Since the attribute length is fixed, the current implementation removes the existing REFERENCE_LIST attribute (H5Adelete) and creates a new REFERENCE_LIST attribute with one more element (H5Acreate). However, H5Aclose() was not being called before H5Adelete(), and this resulted in an incorrect attribute: elements were omitted.
This implementation is a violati...
Fix two memory bugs in get_chunking_plistid()
1. h4toh5 sds_attr_test.hdf sds_attr_test.h5 ~
Under kagiso, the following error occurred:
*** glibc detected *** corrupted double-linked list: 0x003fb878 ***
It turns out that get_chunking_plistid() assumes SDgetchunkinfo() fills comp.comp_type and comp.cinfo, which is not true. In particular, the uninitialized comp.comp_type was used in a branch, which introduced unexpected behavior. In get_chunking_plistid(), one possible path coul...
Update expected results for library test cases so that daily tests pass
These files were generated in 2003 with an old HDF5. It seems h5diff from HDF5 1.9.14 does not correctly compare these ancient expected files with the tester-generated files. Assuming this is an HDF5 problem, we're committing new expected files created with HDF5 1.8.1 to avoid daily test failures. When we used HDF5 1.6.7, the same problem occurred. We're guessing HDF5 1.9.14 may have trouble reading files written by HDF5 1.6 or earlie...
Description:
Updated copyright notices.
In the documentation:
Updated links and THG identity (i.e., NCSA references changed to THG).
Updated release number and date (Release 2.0, May 2008).
Changed "help" link in footers to use image (rather than 'mailto:').
New files: ed_libs/Footer.html, Graphics/help.*
Tested: Firefox
Prevent name conflicts by always putting dimensions in the parent group of 'Data Fields'. Originally, dimensions were put in 'Geolocation Fields', and the 'Data Fields' group had a hard link to them. When an EOS2 dimension and an EOS2 field have the same name, this arrangement introduces an error because an HDF5 object name must be unique within one group. To fix this problem, the last changelist (r320) put dimensions in the parent group of 'Data Fields' only when a name conflict was detected. However, it still ...
This changelist puts HDF-EOS2 dimensions in the parent of the 'Data Fields' group, which corresponds to an HDF-EOS2 Swath or Grid dataset. It aims to fix the name conflict that occurs when an HDF-EOS2 field and an HDF-EOS2 dimension share the same name. This makes one problematic NSIDC file convert correctly, but it also has a problem of its own, which will be handled by the next changelist.
Add test cases, too.
Implement _FillValue correctly.
- Always generate _FillValue attribute
The EOS2 handling routine stores the _FillValue attribute if the h4toh5 general routine cannot do so.
- If an EOS2 field is stored as an HDF4 vdata (one-dimensional)
_FillValue is always written in EOS2 handling routine
H5Pset_fill_value() is called inside of EOS2 handling routine
- If an EOS2 field is stored as an HDF4 sds (multi-dimensional)
H5Pset_fill_value() is called by the general routine. (This...
The same name can be used to define different dimensions as long as they reside in different groups. h4toh5 assumed that dimension names were unique, but this assumption had to be changed to support HDF-EOS2 conversion.
The NSIDC AMSA AE_SI12 set defines two Grid datasets, and each of them defines its own XDim and YDim. This change makes that case work.
This change makes h4toh5 recognize the fill value of HDF-EOS2.
HDF-EOS2 stores a fill value as an attribute, which becomes one HDF4 vdata with a predefined naming convention: if the HDF-EOS2 field name is SWE_NorthernPentad, the vdata name is _FV_SWE_NorthernPentad. Since h4toh5 matches HDF-EOS2 entries by name, the _FV_ prefix prevented h4toh5 from matching HDF-EOS2 attributes to HDF4 vdata objects.
With this fix, h4toh5 checks if the name of an HDF4 object star...
Purpose: Fix Windows test script to run EOS tests correctly
Description:
We were checking for the definition of our conditional EOS2 variable incorrectly. This fixes it, and all EOS tests pass.
Tested:
VS2005 on WinXP
Store attributes related to dimensional scale as null-terminated string.
The H5DS APIs assume that the "CLASS" attribute is null-terminated, but h4toh5 stored the attribute without the null terminator: "DIMENSION_SCALE" was stored as a 15-byte scalar while the H5DS APIs expected "DIMENSION_SCALE\0", which is 16 bytes.
Because of this difference, H5DSis_scale() reported that the object was not a dimension scale, and netCDF-4 couldn't recognize the dimension scales. On all platforms except 64-bit GNU/Linux, this fault went unnoticed.