32 | 32 | "\n",
33 | 33 | "- Removing the feature with the lowest [SHAP](https://shap.readthedocs.io/en/latest/) importance does not always translate to removing the feature with the lowest impact on the model's performance. SHAP importance illustrates how strongly a given feature affects the output of the model, while disregarding the correctness of that prediction.\n",
34 | 34 | "- Currently, the functionality only supports tree-based & linear binary classifiers; in the future the scope might be extended.\n",
35 |    | - "- For large datasets, performing hyperparameter optimization can be very computationally expensive. For gradient boosted tree models, one alternative is to use early stopping of the training step. For this, see [EarlyStoppingShapRFECV](#EarlyStoppingShapRFECV)\n",
   | 35 | + "- For large datasets, performing hyperparameter optimization can be very computationally expensive. For gradient boosted tree models, one alternative is to use early stopping of the training step. For this, use the `early_stopping_rounds` and `eval_metric` parameters.\n",
36 | 36 | "\n",
37 | 37 | "## Setup the dataset\n",
38 | 38 | "\n",
@@ -11232,13 +11232,13 @@
11232 | 11232 | "cell_type": "markdown",
11233 | 11233 | "metadata": {},
11234 | 11234 | "source": [
11235 |       | - "## EarlyStoppingShapRFECV\n",
      | 11235 | + "## Early Stopping ShapRFECV\n",
11236 | 11236 | "\n",
11237 | 11237 | "[Early stopping](https://en.wikipedia.org/wiki/Early_stopping) is a type of regularization, common in [gradient boosted trees](https://en.wikipedia.org/wiki/Gradient_boosting#Gradient_tree_boosting). Supported packages are: [LightGBM](https://lightgbm.readthedocs.io/en/latest/index.html), [XGBoost](https://xgboost.readthedocs.io/en/latest/index.html) and [CatBoost](https://catboost.ai/en/docs/). It consists of measuring how well the model performs after each base learner is added to the ensemble tree, using a relevant scoring metric. If this metric does not improve after a certain number of training steps, the training can be stopped before the maximum number of base learners is reached. \n",
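The stopping rule described above can be sketched in a few lines of plain Python. This is a simplified illustration of the mechanism, not any library's implementation; the hypothetical `patience` argument plays the role of a parameter like LightGBM's `early_stopping_rounds`:

```python
def early_stopping_round(val_scores, patience=5):
    """Return the training round at which early stopping would halt,
    given per-round validation scores (higher is better)."""
    best_score, best_round = float("-inf"), 0
    for rnd, score in enumerate(val_scores):
        if score > best_score:
            # The metric improved: remember this round and keep training.
            best_score, best_round = score, rnd
        elif rnd - best_round >= patience:
            # No improvement for `patience` rounds: stop before the max.
            return rnd
    return len(val_scores) - 1  # early stopping never triggered


# Training stops at round 7: rounds 3-7 show no improvement over round 2.
print(early_stopping_round([0.5, 0.6, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.9]))
```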
11238 | 11238 | "\n",
11239 | 11239 | "Early stopping is thus a relatively cheap way of mitigating overfitting, without having to find the ideal regularization hyperparameters. It is particularly useful for handling large datasets, since it reduces the number of training steps, which can decrease the modelling time.\n",
11240 | 11240 | "\n",
11241 |       | - "`EarlyStoppingShapRFECV` is a child of `ShapRFECV` with limited support for early stopping and the example below shows how to use it with LightGBM."
      | 11241 | + "Early stopping requires the `early_stopping_rounds` and `eval_metric` parameters of the `ShapRFECV` class, and at the moment it only supports the three aforementioned libraries. The example below shows how to use it with LightGBM."
11242 | 11242 | ]
11243 | 11243 | },
11244 | 11244 | {
@@ -192329,12 +192329,12 @@
192329 | 192329 | ],
192330 | 192330 | "source": [
192331 | 192331 | "%%timeit -n 10\n",
192332 |        | - "from probatus.feature_elimination import EarlyStoppingShapRFECV\n",
       | 192332 | + "from probatus.feature_elimination import ShapRFECV\n",
192333 | 192333 | "\n",
192334 | 192334 | "model = lightgbm.LGBMClassifier(n_estimators=200, max_depth=3)\n",
192335 | 192335 | "\n",
192336 | 192336 | "# Run feature elimination\n",
192337 |        | - "shap_elimination = EarlyStoppingShapRFECV(\n",
       | 192337 | + "shap_elimination = ShapRFECV(\n",
192338 | 192338 | "    model=search, step=0.2, cv=10, scoring=\"roc_auc\", eval_metric=\"auc\", early_stopping_rounds=5, n_jobs=3\n",
192339 | 192339 | ")\n",
192340 | 192340 | "report = shap_elimination.fit_compute(X, y)"
@@ -192370,7 +192370,7 @@
192370 | 192370 | "source": [
192371 | 192371 | "As hinted at in the example above, with large datasets and simple base learners, early stopping can be a much faster alternative to hyperparameter optimization of the ideal number of trees.\n",
192372 | 192372 | "\n",
192373 |        | - "Note that although `EarlyStoppingShapRFECV` supports hyperparameter search models as input, early stopping is used only during the Shapley value estimation step, and not during hyperparameter search. For this reason, _if you are not using early stopping, you should use the parent class, `ShapRFECV`, instead of `EarlyStoppingShapRFECV`_."
       | 192373 | + "Note that although `ShapRFECV` with early stopping supports hyperparameter search models as input, early stopping is used only during the Shapley value estimation step, and not during hyperparameter search."
192374 | 192374 | ]
192375 | 192375 | }
192376 | 192376 | ],