Monotonicity constraints are super useful when you have domain knowledge that "all else equal, a higher value in feature X should increase or decrease output Y". This is implemented in most frameworks, including scikit-learn, xgboost, and lightgbm.
I'm not entirely sure how this would be implemented.
From the PR that added this feature to scikit-learn
(scikit-learn/scikit-learn#13649), it seems the
splitting criterion has to implement check_monotonicity,
clip_node_value, and middle_value.
Currently, LogrankCriterion does not implement these methods.
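Conceptually, those three hooks are what the tree builder uses to enforce the constraint. The real implementations are Cython methods on scikit-learn's Criterion classes; the pure-Python sketch below is only an illustration of the logic, not the actual code a LogrankCriterion port would use:

```python
# Illustrative-only sketch of the three monotonicity hooks; the names mirror
# the scikit-learn Criterion methods, but the bodies here are simplified.

def check_monotonicity(left_value: float, right_value: float, cst: int) -> bool:
    """Reject a split whose child values violate the constraint:
    cst=+1 requires left <= right, cst=-1 requires left >= right,
    cst=0 means unconstrained."""
    if cst == 1:
        return left_value <= right_value
    if cst == -1:
        return left_value >= right_value
    return True

def clip_node_value(value: float, lower: float, upper: float) -> float:
    """Clip a node's predicted value into the [lower, upper] bound
    inherited from its ancestors."""
    return min(max(value, lower), upper)

def middle_value(left_value: float, right_value: float) -> float:
    """Midpoint between the two child values; used as the bound passed
    down so that descendants on either side cannot cross it."""
    return (left_value + right_value) / 2.0
```

The open question for this issue is what "node value" means for LogrankCriterion, since the log-rank statistic does not directly produce a scalar prediction per node the way squared-error regression does.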
I think this would be a great add.
Here are the scikit-learn docs for monotonic_cst: