Releases: ppdebreuck/modnet
v0.4.4
v0.4.3
What's Changed
- Fixed buggy conditionals in `evaluate()` by @kyledmiller in #210
- Add simple test for evaluate by @ml-evs in #211
New Contributors
- @kyledmiller made their first contribution in #210
Full Changelog: v0.4.2...v0.4.3
v0.4.2
What's Changed
- Deprecate `BayesianMODNetModel` and update deps by @ml-evs in #182
- Fix issue with `fit_preset` invoking fit incorrectly during refit by @ml-evs in #181
- Python 3.10 compatibility by @ppdebreuck in #198
- Improve evaluate (custom loss, ...) by @ppdebreuck in #194
- Drop Python 3.8 and update other deps by @ml-evs in #201
- Bump matminer version by @ml-evs in #199
- Attempt at bumping pymatgen and matminer by @ml-evs in #203
- Backwards compatibility of test data with pymatgen by @ml-evs in #206
- Properly handle Bayesian model import failure by @ml-evs in #207
Full Changelog: v0.4.1...v0.4.2
v0.4.1
What's Changed
- Fixed `refit=0` in `FitGenetic`; it behaves as before (an ensemble of the 10 best architectures, ensembled over the nested (default 5) folds)
- Bump pymatgen from 2023.1.30 to 2023.7.20, compatible with Cython 3
v0.4.0
What's Changed
- /!\ New default model architecture
  v0.4.0 changes the default architecture of all MODNet models. It is now possible to predict vectors directly (previously, each component had to be treated as an individual jointly learned property, which can be slow when the output dimensionality is high) while keeping the joint-learning architecture. In essence, the architecture moves to joint learning on vectors (a hedged usage sketch follows this list).
  Previously saved models remain compatible and will be loaded with the old architecture. Please consider retraining your saved models in the near future, as modnet will transition to v1.0 without support for the old model architecture.
  See #89 and #155 by @ppdebreuck
- Possibility to remove (or not) fully-NaN features by @gbrunin in #157
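To make the new vector/joint-learning default concrete, here is a minimal training sketch; the nested `targets` hierarchy and the other argument values are assumptions based on the public MODNet API, not taken from these notes:

```python
from modnet.preprocessing import MODData
from modnet.models import MODNetModel

# Assumed-shape sketch: a MODData holding two target columns, so the
# model jointly learns both properties as a vector output.
data = MODData(
    materials=structures,          # list of pymatgen Structures (assumed in scope)
    targets=targets,               # array of shape (n_samples, 2)
    target_names=["prop_a", "prop_b"],
)
data.featurize()
data.feature_selection(n=128)

# Nested target hierarchy: one group jointly learning both properties.
model = MODNetModel(
    [[["prop_a", "prop_b"]]],
    weights={"prop_a": 1.0, "prop_b": 1.0},
    n_feat=64,
)
model.fit(data)
predictions = model.predict(data)  # DataFrame with one column per property
```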
Full Changelog: v0.3.1...v0.4.0
v0.3.1
What's Changed
- Fix backward compatibility: models from before v0.3 can safely be loaded, by @ppdebreuck in #153
- Tweak README by @ml-evs in #154
Full Changelog: v0.3.0...v0.3.1
v0.3.0
What's Changed
- Impute missing values by @gbrunin in #149
  After featurization, NaNs are no longer replaced by 0, and infinite values are replaced by NaNs. The NaNs are then handled when fitting the model using a SimpleImputer, which can be chosen by the user. The imputer is stored as an attribute of the model and re-used when predicting new values. The scaler can also be chosen (StandardScaler or MinMaxScaler), and the user can choose to first impute then scale, or first scale then impute. Either order can be argued for: do we want to keep the same distribution as the initial feature, or change it by moving the NaNs outside the distribution? (See the sketch below.)
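The trade-off between the two orderings can be pictured with plain scikit-learn; this is a minimal sketch using sklearn directly, not the MODNet internals:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, np.nan], [2.0, 0.5], [3.0, 1.5]])

# Impute first, then scale: the filled-in value (here the column mean)
# stays inside the feature's original distribution.
impute_then_scale = make_pipeline(SimpleImputer(strategy="mean"), StandardScaler())

# Scale first, then impute: NaNs pass through the scaler untouched, so the
# imputed value can be placed outside the scaled distribution if desired.
scale_then_impute = make_pipeline(
    StandardScaler(), SimpleImputer(strategy="constant", fill_value=-5.0)
)

print(impute_then_scale.fit_transform(X))
print(scale_then_impute.fit_transform(X))
```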
- New featurizer presets by @gbrunin in #150
  The full list of featurizer presets is:
  - DeBreuck2020Featurizer
  - CompositionOnlyFeaturizer
  - Matminer2023Featurizer
  - MatminerAll2023Featurizer
  - CompositionOnlyMatminer2023Featurizer
  - CompositionOnlyMatminerAll2023Featurizer
  It also adds the possibility to use only features that are continuous with respect to the composition. Some features are by nature not continuous, which can lead to unphysical discontinuities when predicting a property as a function of the material's composition. (See the sketch below.)
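A minimal sketch of selecting one of these presets; the `featurizer` keyword on `MODData` follows the MODNet API, while the `continuous_only` flag is an assumed name for the composition-continuity option and may differ:

```python
from modnet.preprocessing import MODData
from modnet.featurizers.presets import Matminer2023Featurizer

# Assumed flag name: restrict to features continuous w.r.t. composition.
featurizer = Matminer2023Featurizer(continuous_only=True)

data = MODData(
    materials=structures,    # list of pymatgen Structures (assumed in scope)
    targets=targets,         # array-like of target values
    target_names=["prop_a"],
    featurizer=featurizer,
)
data.featurize()
```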
- Introducing better customisation by @ppdebreuck in #148 (a hedged sketch follows this entry)
  - Running feature selection only on a subset of the properties present in the MODData: `feature_selection()` now enables this with `ignore_names`.
  - By default, `FitGenetic` will use joint learning when multiple targets are given in the MODData. This can now be avoided by using `ignore_names` in `FitGenetic()`.
  - `MODNetModel.fit()` can take optional `fit_params` that are passed through to Keras `model.fit()`. `fit_params` can also be passed to `FitGenetic.run()`.
  - `MODNetModel.fit()` can take a custom loss function; so can `FitGenetic()`.
  - Custom data can be passed through `MODNetModel.fit()`. It will be appended to the targets (axis=-1). This can be useful for defining custom loss functions.
  - Any property called `custom_data` in `FitGenetic` is ignored and appended to the targets (axis=-1). This can be useful for defining custom loss functions.
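A minimal sketch of the `fit_params` and custom-loss hooks; the `loss` keyword name and the specific values below are assumptions, not taken from these notes:

```python
import tensorflow as tf
from modnet.models import MODNetModel

def scaled_mae(y_true, y_pred):
    # Example custom loss; any Keras-compatible callable should work.
    return 10.0 * tf.reduce_mean(tf.abs(y_true - y_pred))

model = MODNetModel([[["prop_a"]]], weights={"prop_a": 1.0}, n_feat=64)

# `fit_params` is forwarded to Keras model.fit(); `loss` is an assumed
# keyword for the custom loss hook described above.
model.fit(
    data,                    # a featurized MODData (assumed in scope)
    loss=scaled_mae,
    fit_params={"batch_size": 64, "epochs": 200, "verbose": 0},
)
```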
- Add get_params and set_params to the MODNet model by @gbrunin in #151
  This includes renaming the `EnsembleMODNetModel` `modnet_models` argument to `models`. (A short usage sketch follows.)
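A short sketch of the sklearn-style parameter API; the specific key shown is hypothetical:

```python
# `model` is assumed to be an existing MODNetModel instance.
params = model.get_params()   # dict of the model's constructor parameters
params["n_feat"] = 128        # hypothetical parameter tweak
model.set_params(**params)    # apply the updated parameters
```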
Full Changelog: v0.2.1...v0.3.0
v0.2.1
v0.2.0
What's Changed
- Add new default feature preset and updates for new `matminer` & `pymatgen` versions by @ml-evs in #101
- Bump tensorflow from 2.10.0 to 2.10.1 by @dependabot in #112
- Fix verbosity by @ppdebreuck in #128
- Replace deprecated NumPy and Tensorflow calls by @ml-evs in #123
- Add mode where each featurizer is applied individually by @ml-evs in #127
Full Changelog: v0.1.13...v0.2.0