David Young / vchoi_fork
Vailin Choi / my_hdf5_fork (public)

Commits

6db1b78950a
MuQun Yang committed debeaf6e643 on 19 Nov 2001
[svn-r4612] 
Purpose:
     A new feature
Description:
    While testing the h4toh5 utility with real NASA files, we found an example in which one data array (one SDS) is so big that it exceeds the physical memory of some machines (>128 MB), and the conversion failed. Until the smarter hyperslab operation is available, I am dividing the whole SDS into smaller hyperslabs, each proportional to the original SDS array dimensions. For example, a three-dimensional array with 1000*1000*1000 elements can be divided into eight 500*500*500 pieces; I read and write each piece and keep track of its starting and ending points. In this way the memory allocation failure can be avoided, although it may not be the most efficient approach.
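    As an illustration only, the following sketch (not the actual h4toh5 code) shows the technique in C against the current HDF5 API: each dimension is split in half and every piece is written to the HDF5 dataset as a hyperslab selection. The file name, dataset name, element type, and the fill_block() helper are assumptions for the example, and error checking is omitted.

    /* Sketch: write a 1000*1000*1000 array as eight 500*500*500 hyperslabs. */
    #include <stdlib.h>
    #include "hdf5.h"

    #define RANK  3
    #define RATIO 2                              /* split each dimension in half */

    /* Hypothetical helper: read one piece of the source SDS into buf. */
    static void fill_block(int *buf, const hsize_t *start, const hsize_t *count)
    {
        (void)start; (void)count;
        buf[0] = 0;                              /* placeholder */
    }

    int main(void)
    {
        hsize_t dims[RANK] = {1000, 1000, 1000};
        hsize_t block[RANK], start[RANK];
        int     i, j, k, d;

        for (d = 0; d < RANK; d++)
            block[d] = dims[d] / RATIO;          /* assumes even division */

        int *buf = malloc((size_t)(block[0] * block[1] * block[2]) * sizeof(int));

        hid_t file   = H5Fcreate("converted.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        hid_t fspace = H5Screate_simple(RANK, dims, NULL);
        hid_t dset   = H5Dcreate2(file, "/SDS", H5T_NATIVE_INT, fspace,
                                  H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
        hid_t mspace = H5Screate_simple(RANK, block, NULL);

        /* Walk the 2*2*2 grid of pieces, remembering where each one starts. */
        for (i = 0; i < RATIO; i++)
            for (j = 0; j < RATIO; j++)
                for (k = 0; k < RATIO; k++) {
                    start[0] = (hsize_t)i * block[0];
                    start[1] = (hsize_t)j * block[1];
                    start[2] = (hsize_t)k * block[2];

                    fill_block(buf, start, block);   /* read this piece of the SDS */

                    /* Select the matching region in the file and write it. */
                    H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, block, NULL);
                    H5Dwrite(dset, H5T_NATIVE_INT, mspace, fspace, H5P_DEFAULT, buf);
                }

        H5Sclose(mspace);
        H5Dclose(dset);
        H5Sclose(fspace);
        H5Fclose(file);
        free(buf);
        return 0;
    }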

    I have tested this feature with an SDS without chunking, and it works fine. However, when testing an SDS with chunking, it is extremely slow; this appears to be a bug in the HDF5 library. Quincey may fix it later and give me a more efficient way to handle the problem. Currently all my test files use UNLIMITED dimensions, so in HDF5 the chunking feature is required.
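    For reference, here is a minimal sketch (with assumed dataset name and chunk extents) of the constraint just mentioned: an HDF5 dataset whose maximum dimensions are H5S_UNLIMITED can only use chunked storage, so a chunk shape has to be chosen at creation time through the dataset creation property list.

    /* Sketch: an extendible (UNLIMITED) HDF5 dataset must be chunked. */
    #include "hdf5.h"

    hid_t create_unlimited_dset(hid_t file)          /* hypothetical helper */
    {
        hsize_t dims[3]    = {1000, 1000, 1000};
        hsize_t maxdims[3] = {H5S_UNLIMITED, H5S_UNLIMITED, H5S_UNLIMITED};
        hsize_t chunk[3]   = {100, 100, 100};        /* assumed chunk shape */

        hid_t space = H5Screate_simple(3, dims, maxdims);
        hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);

        /* Chunked layout is required whenever maxdims contains H5S_UNLIMITED. */
        H5Pset_chunk(dcpl, 3, chunk);

        hid_t dset = H5Dcreate2(file, "/SDS_unlimited", H5T_NATIVE_INT, space,
                                H5P_DEFAULT, dcpl, H5P_DEFAULT);

        H5Pclose(dcpl);
        H5Sclose(space);
        return dset;
    }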

    So by default, this feature is not turned on.

Solution:

    See the description above.
Platforms tested:
    Linux 2.2.18
