Releases: neuralhydrology/neuralhydrology
v.1.5.0
v.1.4.0
Updates/Fixes
- Adapted the CAMELS-CL data loader to the new data layout of the "Enero 2022" dataset version.
- Resolved np.int deprecation warnings
- In the LamaH discharge loader, -999 values (the marker for invalid observations) are now replaced with np.nan when loading data.
- The run scheduler now moves processed configs to a newly created sub-directory, which makes it easier to resume the scheduler if it fails at any point.
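The LamaH fix follows a common pattern for handling sentinel values; a minimal sketch with pandas (the column name is hypothetical, this is not the loader's actual code):

```python
import numpy as np
import pandas as pd

# Hypothetical discharge series using -999 as the invalid-observation marker.
q = pd.Series([1.2, -999.0, 3.4, -999.0], name="qobs")

# Replace the sentinel with NaN so downstream losses/metrics can mask it.
q = q.replace(-999.0, np.nan)
```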
Additions
- Added a new metric that checks whether a model misses or hits a flood peak.
Internal, but worth mentioning
- Dataset objects now contain the date that corresponds to the target(s).
First-time contributors
v.1.3.0
New Features:
- Added new dataset classes for the CAMELS-BR, CAMELS-AUS, and LamaH-CE datasets.
- Added the AR-LSTM (as proposed in this paper). This model can be used by setting the config argument model: arlstm. Please refer to the documentation for the specific requirements of this model.
- Added an option random_holdout_from_dynamic_features to the config that applies dropout to the time series features by sampling two Bernoulli processes with different rate parameters. This is used to train the arlstm with simulated missing (autoregressive) inputs, which are then replaced by the model outputs of previous timesteps.
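Put together, the two options above might appear in a run config roughly like this (illustrative only; consult the documentation for the exact structure the holdout option expects):

```yaml
model: arlstm
# random_holdout_from_dynamic_features: <see documentation for the expected format>
```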
Fixes:
- Fixed a bug when writing the metrics.csv file during validation if the first basin (in the basin list) has no validation data.
v.1.2.5
Bugfixes
- Fixed the InputLayer, which misbehaved when using single-frequency data that was resampled from a different (raw) temporal resolution.
- Fixed a severe bug when using use_basin_id_encoding: True. During training, all samples in a batch received the one-hot vector of the last sample in the batch.
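The one-hot bug is easiest to see with a toy example. A correct per-sample encoding (plain NumPy, not the library's code) looks like this; the pre-fix behavior effectively repeated the last sample's vector for the whole batch:

```python
import numpy as np

# Basin indices for a batch of three samples, four basins in total.
basin_ids = np.array([2, 0, 1])

# Correct: each sample gets the one-hot vector of its own basin.
one_hot = np.eye(4)[basin_ids]

# Buggy pre-fix behavior: every sample got the last sample's vector.
buggy = np.tile(np.eye(4)[basin_ids[-1]], (3, 1))
```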
Updates
- Removed prediction_interval_plots as it was a duplicate of percentile_plot. Also updated the docstring of percentile_plot.
v.1.2.4
Fixes
- Beta-KGE was not in the list of available metrics returned by neuralhydrology.evaluation.metrics.get_available_metrics(). Likewise, it was not computed in neuralhydrology.evaluation.metrics.calculate_all_metrics(), and neuralhydrology.evaluation.metrics.calculate_metrics() raised a ValueError because of an unknown metric.
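For context, Beta-KGE is the beta component of the Kling-Gupta efficiency, i.e. the ratio of the simulated mean to the observed mean. A standalone NumPy sketch (not the library's implementation):

```python
import numpy as np

def beta_kge(obs: np.ndarray, sim: np.ndarray) -> float:
    """Beta component of KGE: mean(sim) / mean(obs). A value of 1 means no bias."""
    return float(np.mean(sim) / np.mean(obs))

obs = np.array([1.0, 2.0, 3.0])
sim = np.array([2.0, 2.0, 2.0])  # same mean as obs, so beta_kge == 1.0
```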
Other
- Added CITATION.cff for the JOSS paper
v.1.2.3 JOSS paper publication
Updates
- removed broken cuda9.2 environment and added a new cuda11.3 environment
- updated docs of GenericDataset and description in Tutorial 3
v.1.2.2
New
- NeuralHydrology can now be installed from PyPI (we added automatic upload to PyPI for every tagged release). #66
Fixes
- Some typo and type annotation fixes in the Transformer model
Updates
- Updated installation guide to include info about PyPI installation options
- Added missing config argument
allow_subsequent_nan_losses
to config docs
v.1.2.1
Updates and Additions
- Added tutorial that explains the download process for data that is required to run our tutorials locally. #68
- Updated tutorials to better highlight the data requirements. #68
- Updated tutorials to be easier to run in CPU environments without changes to the config/code #68
- Added templates for opening an issue
- Fixed typos
Fixes
- Fixed a problem with loading hourly streamflow data from csv (CAMELS US dataset) #67
- Fixed problems with logging the git hash that made the commit hash appear badly formatted
v.1.2.0
New Feature
- If you use NeuralHydrology as a git repository and have uncommitted changes in your local copy, we added an option to include the git diff in the run directory. Set save_git_diff to True to make use of this feature.
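In the run config, that is a single line (both names taken from the release note above):

```yaml
save_git_diff: True
```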
Fixes
- fixed a bug from v.1.1.0 that caused problems when evaluating basins with all-NaN targets
Other
- Added guide for contributing to NeuralHydrology
- Updated the tutorials to link to the underlying Jupyter notebooks and removed full paths from some notebooks.
- Updated the quickstart guide with better instructions for setting up NeuralHydrology
Additional loss logging and some language fixes
New Features
- Besides the validation metrics, the validation loss is now also always logged to tensorboard, computed as the average loss across all basins, weighted by the number of batches per basin.
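The weighting scheme can be sketched in a few lines of plain Python (the numbers are illustrative, not from a real run):

```python
# Per-basin validation losses and the number of batches behind each value.
losses = [0.5, 0.3]
n_batches = [10, 30]

# Average loss across basins, weighted by batch count.
val_loss = sum(l * n for l, n in zip(losses, n_batches)) / sum(n_batches)
# -> (0.5*10 + 0.3*30) / 40 = 0.35
```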
Fixes
- Spelling mistakes