add warning that validation scores of graphlearner are prefixed by PO id #881


Merged (3 commits) on Jun 18, 2025.

`book/chapters/chapter15/predsets_valid_inttune.qmd` (6 changes: 5 additions & 1 deletion)
@@ -281,7 +281,7 @@ Internally, this will train XGBoost with 10 different values of `eta` and the `n

When combining internal tuning with hyperparameter optimization via `r ref_pkg("mlr3tuning")`, we need to specify two performance metrics: one for the internal tuning and one for the `Tuner`. For this reason, `mlr3` requires the internal tuning metric to be set explicitly, even if a default value exists. There are two ways to use the same metric for both types of hyperparameter optimization:

- 1. Use `msr("internal_valid_scores", select = <id>)`, i.e. the final validation score, as the tuning measure. As a learner can have multiple internal valid scores, the measure allows us to select one by specifying the `select` argument. We also need to specify whether the measure should be minimized.
+ 1. Use `msr("internal_valid_scores", select = <id>)`, i.e. the final validation score, as the tuning measure. As a learner can have multiple internal valid scores, the measure allows us to select one by specifying the `select` argument. If this is not specified, the first validation measure will be used. We also need to specify whether the measure should be minimized.
2. Set both the `eval_metric` and the tuning measure to the same metric, e.g. `eval_metric = "error"` and `measure = msr("classif.ce")`. Some learners even allow setting the validation metric to an `mlr3::Measure`. You can find out which ones support this feature by checking their corresponding documentation. One example of this is XGBoost.

The advantage of using the first option is that the predict step can be skipped because the internal validation scores are already computed during training.
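
For concreteness, here is a minimal sketch of the first option. It is hedged: the `sonar` task, random search, the tuning budget, and the `"logloss"` metric are placeholder choices rather than taken from the chapter, and the snippet assumes `mlr3`, `mlr3learners`, and `mlr3tuning` are installed.

```r
library(mlr3)
library(mlr3learners)
library(mlr3tuning)

# XGBoost with early stopping: `nrounds` is tuned internally against the
# validation data, `eta` by the Tuner; the validation metric is set
# explicitly, as required.
learner = lrn("classif.xgboost",
  eta = to_tune(1e-4, 1e-1, logscale = TRUE),
  nrounds = to_tune(upper = 500, internal = TRUE),
  early_stopping_rounds = 10,
  eval_metric = "logloss",
  validate = "test"  # use the resampling test sets for validation
)

ti = tune(
  tuner = tnr("random_search"),
  task = tsk("sonar"),
  learner = learner,
  resampling = rsmp("cv", folds = 3),
  # option 1: the stored internal validation score doubles as the tuning
  # measure; `select` picks the score, `minimize` gives its direction
  measures = msr("internal_valid_scores", select = "logloss", minimize = TRUE),
  term_evals = 10
)
```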
@@ -302,6 +302,10 @@ ti = tune(
)
```

+ ::: {.callout-warning}
+ When working with a `GraphLearner`, the names of the internal validation scores are prefixed by the ID of the corresponding `PipeOp`, so the `select` parameter needs to be set to `"<pipeop id>.<measure id>"`.
+ :::
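
To make the warning concrete, here is a hedged sketch of the `GraphLearner` case; the scaling pipeline, task, and `"logloss"` metric are again illustrative assumptions. The XGBoost `PipeOp` keeps the ID `classif.xgboost`, so its validation score is reported as `classif.xgboost.logloss`:

```r
library(mlr3)
library(mlr3learners)
library(mlr3pipelines)
library(mlr3tuning)

# scale the features, then fit XGBoost; the learner's PipeOp id is
# "classif.xgboost", which will prefix its internal validation score
glrn = as_learner(po("scale") %>>% lrn("classif.xgboost",
  eta = to_tune(1e-4, 1e-1, logscale = TRUE),
  nrounds = to_tune(upper = 500, internal = TRUE),
  early_stopping_rounds = 10,
  eval_metric = "logloss"
))
# for a GraphLearner, validation is configured on the wrapper
set_validate(glrn, validate = "test")

ti = tune(
  tuner = tnr("random_search"),
  task = tsk("sonar"),
  learner = glrn,
  resampling = rsmp("cv", folds = 3),
  # `select` must now use the "<pipeop id>.<measure id>" form
  measures = msr("internal_valid_scores",
    select = "classif.xgboost.logloss", minimize = TRUE),
  term_evals = 10
)
```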

The tuning result contains the best found configuration for both `eta` and `nrounds`.

```{r predsets_valid_inttune-029}