* configuration options for metadata and raw data caches.
* The fields of the structure are discussed individually below:
* version: Integer field containing the version number of this version
* of the H5AC_cache_config_t structure. Any instance of
* H5AC_cache_config_t passed to the cache must have a known
* version number, or an error will be flagged.
* rpt_fcn_enabled: Boolean field used to enable and disable the default
* reporting function. This function is invoked every time the
* automatic cache resize code is run, and reports on its activities.
* This is a debugging function, and should normally be turned off.
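*
* A minimal sketch of how these first two fields are typically used,
* assuming an already open HDF5 file identifier (here called file_id)
* and the public C routines H5Fget_mdc_config() / H5Fset_mdc_config():
*
*     H5AC_cache_config_t config;
*
*     /* version must be set to a known value before the get call */
*     config.version = H5AC__CURR_CACHE_CONFIG_VERSION;
*     if (H5Fget_mdc_config(file_id, &config) < 0)
*         /* handle error */;
*
*     /* enable the default reporting function (debugging only) */
*     config.rpt_fcn_enabled = TRUE;
*     if (H5Fset_mdc_config(file_id, &config) < 0)
*         /* handle error */;
*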
* open_trace_file: Boolean field indicating whether the trace_file_name
* field should be used to open a trace file for the cache.
* The trace file is a debugging feature that allows the capture of
* top level metadata cache requests for purposes of debugging and/or
* optimization. This field should normally be set to FALSE, as
* trace file collection imposes considerable overhead.
* This field should only be set to TRUE when the trace_file_name
* contains the full path of the desired trace file, and either
* there is no open trace file on the cache, or the close_trace_file
* field is also TRUE.
* close_trace_file: Boolean field indicating whether the current trace
* file (if any) should be closed.
* See the above comments on the open_trace_file field. This field
* should be set to FALSE unless there is an open trace file on the
* cache that you wish to close.
* trace_file_name: Full path of the trace file to be opened if the
* open_trace_file field is TRUE.
* In the parallel case, an ASCII representation of the MPI rank of
* the process will be appended to the file name to yield a unique
* trace file name for each process.
* The length of the path must not exceed H5AC__MAX_TRACE_FILE_NAME_LEN
* characters.
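*
* A minimal sketch of turning tracing on and later off again, assuming
* an already open file identifier file_id, an illustrative path, and
* <string.h> for strncpy():
*
*     H5AC_cache_config_t config;
*
*     config.version = H5AC__CURR_CACHE_CONFIG_VERSION;
*     if (H5Fget_mdc_config(file_id, &config) < 0)
*         /* handle error */;
*
*     /* open the trace file -- expect considerable overhead */
*     config.open_trace_file = TRUE;
*     strncpy(config.trace_file_name, "/tmp/mdc_trace.txt",
*             H5AC__MAX_TRACE_FILE_NAME_LEN);
*     config.trace_file_name[H5AC__MAX_TRACE_FILE_NAME_LEN] = '\0';
*     if (H5Fset_mdc_config(file_id, &config) < 0)
*         /* handle error */;
*
*     /* ... operations to be traced ... */
*
*     /* close the trace file again */
*     config.open_trace_file  = FALSE;
*     config.close_trace_file = TRUE;
*     if (H5Fset_mdc_config(file_id, &config) < 0)
*         /* handle error */;
*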
* evictions_enabled: Boolean field used to either report the current
* evictions enabled status of the cache, or to set the cache's
* evictions enabled status.
* In general, the metadata cache should always be allowed to
* evict entries. However, in some cases it is advantageous to
* disable evictions briefly, and thereby postpone metadata
* writes. However, this must be done with care, as the cache
* can grow quickly. If you do this, re-enable evictions as
* soon as possible and monitor cache size.
* At present, evictions can only be disabled if automatic
* cache resizing is also disabled (that is, ( incr_mode ==
* H5C_incr__off ) && ( decr_mode == H5C_decr__off )). There
* is no logical reason why this should be so, but it simplifies
* implementation and testing, and I can't think of any reason
* why it would be desirable. If you can think of one, I'll
* revisit the issue.
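*
* A minimal sketch of disabling evictions briefly and then re-enabling
* them, assuming an already open file identifier file_id (note that, per
* the restriction above, the automatic resize code is switched off via
* the incr_mode and decr_mode members of this structure):
*
*     H5AC_cache_config_t config;
*
*     config.version = H5AC__CURR_CACHE_CONFIG_VERSION;
*     if (H5Fget_mdc_config(file_id, &config) < 0)
*         /* handle error */;
*
*     config.incr_mode         = H5C_incr__off;
*     config.decr_mode         = H5C_decr__off;
*     config.evictions_enabled = FALSE;
*     if (H5Fset_mdc_config(file_id, &config) < 0)
*         /* handle error */;
*
*     /* ... the metadata-heavy work; watch the cache size ... */
*
*     /* re-enable evictions as soon as possible */
*     config.evictions_enabled = TRUE;
*     if (H5Fset_mdc_config(file_id, &config) < 0)
*         /* handle error */;
*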
* set_initial_size: Boolean flag indicating whether the initial size of
* the cache is to be set to the value given in
* the initial_size field. If set_initial_size is FALSE, the
* initial_size field is ignored.
* initial_size: If enabled, this field contains the size the cache is
* to be set to upon receipt of this structure. Needless to say,
* initial_size must lie in the closed interval [min_size, max_size].
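*
* A minimal sketch of forcing the cache to a specific starting size
* (8 MiB here, chosen arbitrarily; it must lie within
* [min_size, max_size], and file_id is an assumed open file identifier):
*
*     H5AC_cache_config_t config;
*
*     config.version = H5AC__CURR_CACHE_CONFIG_VERSION;
*     if (H5Fget_mdc_config(file_id, &config) < 0)
*         /* handle error */;
*
*     config.set_initial_size = TRUE;
*     config.initial_size     = 8 * 1024 * 1024;   /* 8 MiB */
*     if (H5Fset_mdc_config(file_id, &config) < 0)
*         /* handle error */;
*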
* In PHDF5, all operations that modify metadata must be executed collectively.
* We used to think that this was enough to ensure consistency across the
* metadata caches, but since we allow processes to read metadata individually,
* the order of dirty entries in the LRU list can vary across processes,
* which can result in inconsistencies between the caches.
* PHDF5 uses several strategies to prevent such inconsistencies in metadata,
* all of which use the fact that the same stream of dirty metadata is seen
* by all processes for purposes of synchronization. This is done by
* having each process count the number of bytes of dirty metadata generated,
* and then running a "sync point" whenever this count exceeds a user
* specified threshold (see dirty_bytes_threshold below).
* The current metadata write strategy is indicated by the
* metadata_write_strategy field. The possible values of this field, along
* with the associated metadata write strategies, are discussed below.
* dirty_bytes_threshold: Threshold of dirty byte creation used to
* synchronize updates between caches. (See above for outline and
* motivation.)
* This value MUST be consistent across all processes accessing the
* file. This field is ignored unless HDF5 has been compiled for
* parallel.
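*
* A minimal sketch of setting the sync point parameters on a file access
* property list before the parallel open (fapl_id is an assumed property
* list created with H5Pcreate(H5P_FILE_ACCESS); the same values must be
* used on every MPI process):
*
*     H5AC_cache_config_t config;
*
*     config.version = H5AC__CURR_CACHE_CONFIG_VERSION;
*     if (H5Pget_mdc_config(fapl_id, &config) < 0)
*         /* handle error */;
*
*     config.dirty_bytes_threshold   = 512 * 1024;   /* sync every 512 KiB */
*     config.metadata_write_strategy =
*         H5AC_METADATA_WRITE_STRATEGY__DISTRIBUTED;
*     if (H5Pset_mdc_config(fapl_id, &config) < 0)
*         /* handle error */;
*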
* metadata_write_strategy: Integer field containing a code indicating the
* desired metadata write strategy. The valid values of this field
* are enumerated and discussed below:
* H5AC_METADATA_WRITE_STRATEGY__PROCESS_0_ONLY:
* When metadata_write_strategy is set to this value, only process
* zero is allowed to write dirty metadata to disk. All other
* processes must retain dirty metadata until they are informed at
* a sync point that the dirty metadata in question has been written
* to disk.
* When the sync point is reached (or when there is a user generated
* flush), process zero flushes sufficient entries to bring it into
* compliance with its min clean size (or flushes all dirty entries in
* the case of a user generated flush), broadcasts the list of
* entries just cleaned to all the other processes, and then exits
* the sync point.
* Upon receipt of the broadcast, the other processes mark the indicated
* entries as clean, and leave the sync point as well.
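*
* The following is an illustrative sketch of the shape of one such sync
* point round -- NOT the library's internal code. haddr_t is the HDF5
* file address type; MAX_CLEANED, flush_until_min_clean() and
* mark_entry_clean() are hypothetical stand-ins for local cache
* operations:
*
*     /* illustrative only -- not the HDF5 internals */
*     void process_0_only_sync_point(MPI_Comm comm)
*     {
*         int     rank, n_cleaned = 0;
*         haddr_t cleaned[MAX_CLEANED];   /* hypothetical upper bound */
*
*         MPI_Comm_rank(comm, &rank);
*
*         if (rank == 0) {
*             /* rank 0 writes enough dirty entries to meet its min
*              * clean size, recording the address of each one */
*             n_cleaned = flush_until_min_clean(cleaned, MAX_CLEANED);
*         }
*
*         /* everyone learns how many entries were cleaned, and which */
*         MPI_Bcast(&n_cleaned, 1, MPI_INT, 0, comm);
*         MPI_Bcast(cleaned, (int)(n_cleaned * sizeof(haddr_t)),
*                   MPI_BYTE, 0, comm);
*
*         if (rank != 0) {
*             /* the other ranks mark those entries clean locally,
*              * without writing them, and leave the sync point */
*             for (int i = 0; i < n_cleaned; i++)
*                 mark_entry_clean(cleaned[i]);
*         }
*     }
*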
* H5AC_METADATA_WRITE_STRATEGY__DISTRIBUTED:
* In the distributed metadata write strategy, process zero still makes
* the decisions as to what entries should be flushed, but the actual
* flushes are distributed across the processes in the computation to
* the extent possible.
* In this strategy, when a sync point is triggered (either by dirty
* metadata creation or manual flush), all processes enter a barrier.
* On the other side of the barrier, process 0 constructs an ordered
* list of the entries to be flushed, and then broadcasts this list
* to the caches in all the processes.
* All processes then scan the list of entries to be flushed, flushing
* some, and marking the rest as clean. The algorithm for this purpose
* ensures that each entry in the list is flushed exactly once, and
* all are marked clean in each cache.
* Note that in the case of a flush of the cache, no message passing
* is necessary, as all processes have the same list of dirty entries,
* and all of these entries must be flushed. Thus in this case it is
* sufficient for each process to sort its list of dirty entries after
* leaving the initial barrier, and use this list as if it had been
* received from process zero.
* To avoid possible messages from the past/future, all caches must
* wait until all caches are done before leaving the sync point.
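*
* As an illustrative sketch only (again, not the actual library code) of
* how the exactly-once property can be obtained once every cache holds
* the same ordered candidate list -- here by striding through the list
* by rank. flush_entry() and mark_entry_clean() are hypothetical local
* cache operations:
*
*     /* illustrative only -- not the HDF5 internals */
*     void distributed_sync_point(const haddr_t *candidates,
*                                 int n_candidates, MPI_Comm comm)
*     {
*         int rank, nprocs;
*
*         MPI_Comm_rank(comm, &rank);
*         MPI_Comm_size(comm, &nprocs);
*
*         MPI_Barrier(comm);                  /* enter the sync point */
*
*         for (int i = 0; i < n_candidates; i++) {
*             if (i % nprocs == rank)
*                 flush_entry(candidates[i]);      /* this rank writes it */
*             else
*                 mark_entry_clean(candidates[i]); /* another rank did    */
*         }
*
*         /* leave the sync point only when every cache is done */
*         MPI_Barrier(comm);
*     }
*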
****************************************************************************/
#define H5AC__CURR_CACHE_CONFIG_VERSION 1
#define H5AC__MAX_TRACE_FILE_NAME_LEN 1024
#define H5AC_METADATA_WRITE_STRATEGY__PROCESS_0_ONLY 0
#define H5AC_METADATA_WRITE_STRATEGY__DISTRIBUTED 1
typedef struct H5AC_cache_config_t
{
/* general configuration fields: */
int version;
hbool_t rpt_fcn_enabled;
hbool_t open_trace_file;
hbool_t close_trace_file;
char trace_file_name[H5AC__MAX_TRACE_FILE_NAME_LEN + 1];
hbool_t evictions_enabled;
hbool_t set_initial_size;
size_t initial_size;
double min_clean_fraction;