Description
Hi,
While testing the system, I noticed that some metrics (e.g., Matching Model Precision@10) show very low values (below 10%).
To rule out issues related to the dataset or configuration, I instantiated the container in playground mode following the official instructions. However, I obtained similar results (as shown in the attached screenshot).
Is this behavior expected? Am I misinterpreting those metrics?
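For reference, this is my understanding of how Precision@K is typically computed (a minimal sketch with hypothetical item names, not code from this project). If the system defines it differently, that could explain my confusion:

```python
def precision_at_k(recommended, relevant, k=10):
    """Fraction of the top-k recommended items that are relevant."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

# Hypothetical example: 2 of the top 10 recommendations are relevant -> 0.2
recommended = [f"item_{i}" for i in range(10)]
relevant = {"item_3", "item_7"}
print(precision_at_k(recommended, relevant, k=10))  # 0.2
```

Under this definition, values below 10% would mean fewer than one relevant item per ten recommendations on average, which is why the numbers surprised me.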
Additionally, I tried modifying some parameters in the config.toml file, and even after restarting the container to ensure the changes were applied, I didn't observe any differences in the metric results.
Is the config.toml supposed to affect evaluation metrics directly, or am I missing a required step to apply the changes?
Finally, I would appreciate some clarification on the terminology:
What exactly do "matching model" and "ranking model" refer to in this context?
Thanks in advance!