
Releases: neuralhydrology/neuralhydrology

v1.12.0

15 Mar 12:47
e0f5261

Added

  • Added input replacing, masked mean, and attention support to InputLayer. This allows models to handle NaN inputs. More details in the preprint How to deal with missing input data. Use nan_handling_method and related config arguments to control this behavior. Note that the sample validation remains unchanged, i.e., during training NaNs can only be simulated through nan_step_probability/nan_sequence_probability.
  • BaseDataset now returns dynamic features as a dictionary and no longer as a merged "x_d" tensor. This makes it much easier to build models like MC-LSTM that handle certain inputs (e.g., precipitation) separately from the other inputs.
  • MacOS Metal accelerator support (#213, thanks @tberends)
  • Allow changing the run directory for evaluation (#195, thanks @BaptisteFrancois)
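
The NaN-handling behavior above is controlled through the run configuration. A minimal sketch (the option values shown are illustrative assumptions; see the config documentation for the supported choices):

```yaml
# Sketch of the NaN-handling options in a run configuration.
# The value of nan_handling_method is an assumption; check the
# documentation for the exact names of the supported methods
# (input replacing, masked mean, attention).
nan_handling_method: masked_mean

# During training, NaNs are only introduced artificially:
nan_step_probability: 0.1       # probability of masking a single timestep
nan_sequence_probability: 0.05  # probability of masking a whole sequence
```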

Fixes

  • Hourly frequency identifiers changed from H to h. This resolves pandas deprecation warnings.
  • Fixed metrics writing when cfg.metrics is specified as all and when doing multi-target prediction
  • Fixed FLV metrics docstring (#203, thanks @XuHuanHydro)
  • Fixed forecast configuration documentation
  • Fixed forecast overlap regularization
  • Updated CI workflow

v1.11.0

02 Aug 13:13
7cd1846

Added

  • Mamba state space model (#163, thanks @taddyb)
  • AdamW optimizer option and ReLU activation function option (#169, thanks @Multihuntr)

Fixes

  • Added missing API docs for hybrid models
  • Fix in the implementation of the SHM hybrid model (thanks @eduardoAcunaEspinoza)
  • Ensure compatibility with pandas >=2.2 (frequency aliases and items vs. iteritems)
  • Attribute loading in Caravan dataset (#168)

v1.10.0

08 Feb 08:32
2a57873

Added

  • HybridModel: a wrapper class to combine deep learning models and conceptual hydrology models, where the deep learning model parameterizes the conceptual model, which also needs to be implemented in PyTorch. In the current implementation, the deep learning model is always a standard LSTM, as commonly used in the literature.
  • BaseConceptualModel: a parent class to facilitate adding new conceptual models to the NeuralHydrology framework.
  • SHM: an implementation of a BaseConceptualModel that adds a modified version of the SHM to the modelzoo. See the documentation for details and references about this model.
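
Selecting the hybrid setup in a run configuration might look like the following sketch. The model identifier and the conceptual-model key are assumptions; verify the exact names against the modelzoo documentation:

```yaml
# Hypothetical sketch: an LSTM parameterizing the SHM conceptual model.
model: hybrid_model    # assumed identifier; verify against the modelzoo docs
conceptual_model: SHM  # assumed key for choosing the BaseConceptualModel subclass
hidden_size: 64        # size of the LSTM that produces the conceptual parameters
```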

Fixes

  • A solution that removes the circular import error of #157. As a result, load_scaler.py has been moved to datautils/utils.py
  • An update of the documentation to resolve #138
  • Some corner cases in the sampling utils related to #154
  • Minor changes to the Tester class to resolve #158

Huge thanks to @eduardoAcunaEspinoza for contributing the HybridModel, BaseConceptualModel and SHM implementation.

v1.9.1

23 Nov 21:16
f1ac00b
  • Fix recursive import issue
  • Fix typos
  • Remove old and broken code.

v.1.9.0

18 Aug 11:36
df8c2a6

Added

  • Option to end an epoch early after a given number of update steps. Use max_updates_per_epoch to define a fixed number of update steps per epoch. This can be useful if you train a model with many samples and want to evaluate the model more frequently than just after each full epoch. See also #131
  • Redesign of the config argument experiment_name. You can now add wildcards to the experiment name using curly brackets around any config argument, e.g., my_awesome_run_{batch_size}_{hidden_size}. When you start training, the wildcards are replaced with the respective values of the config arguments (here, batch size and hidden size), keeping the name of the config argument for easier recognition. In our experience, this is convenient if you, for example, do some hyperparameter tuning and don't want to change the experiment name every time, but still want expressive folder names and run names in TensorBoard. For details, check the documentation.
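
Together, the two options above might be used like this in a config file (values are illustrative, and the exact rendering of the resolved experiment name may differ):

```yaml
# End each epoch after a fixed number of update steps, so that
# validation runs more frequently on datasets with many samples.
max_updates_per_epoch: 500

# Wildcards in curly brackets are replaced by the corresponding
# config values while keeping the argument names, e.g., something
# like my_awesome_run_batch_size256_hidden_size128.
experiment_name: my_awesome_run_{batch_size}_{hidden_size}
batch_size: 256
hidden_size: 128
```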

Fixes

  • Some pandas FutureWarnings

v.1.8.1

27 Jul 20:16
1a3c313

Fixes

  • Fixed #133 and also fixed an issue where the metrics csv file would be empty for multi-frequency runs.
  • Fixes a bug where uncertainty (GMM & CMAL) runs with predict_last_n > 1 would generate incorrect predictions due to a mixup of dimensions. This was discovered in an MTS-LSTM setting where the hourly branch uses predict_last_n = 24; visually, this resulted in 24-hour steps in the predictions. UMAL and MCD are unaffected because they sample differently.
  • Fixes an issue with uncertainty runs with active negative sample handling, where centering would cut off values below the normalized value of zero (i.e., usually the mean) rather than the actual zero. The fix calculates the normalized value of zero from the scaler and uses it as the cutoff value. Also includes a faster, vectorized check for negative values (torch.any instead of any). Relates also to #88

v.1.8.0

07 Jul 11:05
068135a

New Features

  • Option to save all outputs from any model instead of just the target variable (cfg.save_all_validation_output)
  • Several new forecasting models:
    • HandoffForecastLSTM: a forecasting model that uses a state-handoff to transition from a hindcast sequence model to a forecast sequence (LSTM) model.
    • MultiheadForecastLSTM: a forecasting model that runs a sequential (LSTM) model up to the forecast issue time, and then directly predicts a sequence of forecast timesteps without using a recurrent rollout.
    • SequentialForecastLSTM: a forecasting model that uses a single sequential (LSTM) model that rolls out through both the hindcast and forecast sequences.
    • StackedForecastLSTM: a forecasting model that uses two stacked sequential (LSTM) models, one for the hindcast sequence and one for the forecast sequence.
  • Option to add a timestep counter for forecasting (cfg.timestep_counter)

To use the new forecasting models, there are several new config options, most notably cfg.forecast_inputs and cfg.hindcast_inputs to specify which inputs are used for forecasting vs. for hindcasting. See the documentation for more details.
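
A forecasting run configuration using these options might be sketched as follows. The feature names are placeholders and the model identifier is an assumption; only the config keys named above are taken from this release:

```yaml
model: handoff_forecast_lstm  # assumed identifier for HandoffForecastLSTM
hindcast_inputs:              # inputs available up to the forecast issue time
  - precipitation_obs         # placeholder feature name
  - temperature_obs           # placeholder feature name
forecast_inputs:              # inputs available over the forecast horizon
  - precipitation_fcst        # placeholder feature name
  - temperature_fcst          # placeholder feature name
timestep_counter: True        # optionally add a timestep counter feature
save_all_validation_output: True  # save all model outputs, not just targets
```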

  • Enable non-verbose mode in trainer and tester (#124)
  • Enable predictions in basins with no observations (#121)

Fixes

  • Error in FDC slope signatures formula (#125)

v.1.7.0

17 May 12:26
08d3559

Fixes

  • Handling of weekly time resolutions in the BaseDataset, see #111
  • Fix issue with UMAL during validation mode, see #114

Note that with this release, the umal_extend_batch method is moved to utils.samplingutils, and training.utils and training.umaltrainer are removed. The UMAL functionality remains unchanged, though.

v.1.6.0

04 Apr 07:51
9dac113
  • Fix environment files
  • Fix type annotation that caused #109
  • Added options for weighted regularizations
  • Add regularization loss terms to tensorboard

v.1.5.1

30 Jan 13:48
d38db98

Fixes

  • Fix in basetrainer.py to resolve problems around finetuning, see #105